
Locking Resources with ARM

Sometimes you need to restrict access to an Azure subscription, resource group or resource in order to prevent its accidental deletion or modification by other users. With Azure Resource Manager you can lock your resources at two levels.

  • CanNotDelete – Authorized users can read and modify a resource, but they can’t delete it.
  • ReadOnly – Authorized users can read from a resource, but they can’t delete it or perform any actions on it. Their permission on the resource is restricted to the Reader role.

The ReadOnly lock can be tricky in certain situations. For example, a ReadOnly lock placed on a storage account prevents all users from listing the keys, because the list keys operation is handled through a POST request (the returned keys are available for write operations). When you apply a lock at a parent scope, all child resources inherit the same lock. For example, if you apply a lock on a resource group, all the resources in it inherit the lock, and even resources you add later will inherit it.

Locking with PowerShell

The following snippet demonstrates how you can apply a resource lock using PowerShell.

New-AzureRmResourceLock -LockLevel <CanNotDelete or ReadOnly> -LockName <lock name> -ResourceName <resource name> -ResourceType <resource type> -ResourceGroupName <resource group name>

Here you should provide the exact resource type. For a complete list of available Azure resource providers, please refer to this article.
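For instance, here is a minimal sketch that locks a hypothetical storage account against deletion; the account, lock and resource group names below are placeholders.

# Sketch: apply a CanNotDelete lock to a hypothetical storage account
New-AzureRmResourceLock -LockLevel CanNotDelete -LockName "NoDelete-Storage" -ResourceName "mystorageaccount" -ResourceType "Microsoft.Storage/storageAccounts" -ResourceGroupName "MyResourceGroup"

You can then verify the lock with Get-AzureRmResourceLock -ResourceGroupName "MyResourceGroup".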

Savision Free Whitepaper – Born in the cloud | Monitoring Linux Workloads with OMS

I have recently authored a whitepaper titled “Born in the Cloud: Monitoring Linux Workloads with OMS”, published by Savision. This whitepaper focuses on the Linux workload monitoring capabilities of Microsoft OMS, a born-in-the-cloud management suite capable of managing and protecting heterogeneous on-premises, cloud and hybrid data centers.

The following are the key areas of discussion in my whitepaper.

  • What Microsoft Operations Management Suite is and how it can simplify data center management.
  • Leveraging OMS Log Analytics to analyze, predict and protect your Linux workloads.
  • Integrating System Center Operations Manager with OMS for extended monitoring.
  • Harnessing the power of Business Service Monitoring of Savision Live Maps Unity in Microsoft OMS.

You can download this FREE whitepaper from here.

About Savision

Savision is the market leader in business service and cloud management solutions for Microsoft System Center. Savision’s monitoring and visualization capabilities bridge the gap between IT and business by transforming IT data into predictive, actionable and relevant information about the entire cloud and datacenter infrastructure. You can visit www.savision.com for more information about their product portfolio.

Azure Automation PowerShell ISE add-on is now GA

The Azure Automation team announced the general availability of the PowerShell ISE add-on for Azure Automation last week. With this add-on it is easier to author your Azure Automation runbooks using the familiar PowerShell ISE. Below are some of the notable features of this add-on.

  • Use Automation activities (Get-AutomationVariable, Get-AutomationPSCredential, etc.) in local PowerShell workflows and scripts (see the sketch after this list)
  • Create and edit Automation assets locally
  • Easily track local changes to runbooks and assets vs the state of these items in an Azure Automation account
  • Sync runbook / asset changes between a local runbook authoring environment and an Azure Automation account
  • Test PowerShell workflows and scripts locally in the ISE and in the automation service
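As a minimal sketch, a locally authored runbook might look like the following; the asset names here are hypothetical, and the add-on resolves these Automation activities against locally stored asset values while you test in the ISE.

# Sketch: local workflow using Automation activities (asset names are hypothetical)
workflow Restart-DemoVm
{
    $cred = Get-AutomationPSCredential -Name "DemoRunAsCred"
    $vmName = Get-AutomationVariable -Name "DemoVmName"
    Write-Output "Would restart $vmName as $($cred.UserName)"
}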

Installing the Azure Automation add-on for PowerShell ISE is pretty straightforward. Although you can install the add-on from the GitHub source, Microsoft recommends that you install it from the PowerShell Gallery.

  • In an elevated PowerShell window, execute the cmdlet below. This installs the add-on only for the current user.

Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser

  • To automatically load the Azure Automation ISE add-on every time you open the PowerShell ISE, execute the cmdlet below.

Install-AzureAutomationIseAddOn

  • To load the add-on ad hoc, only when you want it, execute the cmdlet below in the PowerShell ISE.

Import-Module AzureAutomationAuthoringToolkit

Managing Cloud Storage with Microsoft Azure Storage Explorer

Today you might be using various third-party tools to perform management operations on your Azure storage accounts. CloudXplorer & CloudBerry are good candidates, but they are not free (as in beer). For developers using Visual Studio 2013/2015 the built-in Cloud Explorer is a perfect tool, but what about IT professionals like us? Do we have a good and free alternative?

Microsoft has introduced a standalone version of Microsoft Azure Storage Explorer (Preview) with the Azure SDK 2.8 release. This tool lets you quickly create blob containers, upload file content into blob containers, download files, set properties and metadata, and even create and get SAS keys to control access. You can also quickly search for containers and individual blobs, and inspect things like metadata and properties on the blobs.

Features in Storage Explorer

  • Mac OS X, Windows, and Linux versions (New in v0.7.20160107)
  • Sign in to view your Storage Accounts – use your Org Account, Microsoft Account, 2FA, etc
  • Add Storage Accounts by account name and key, as well as custom endpoints (New in v0.7.20160107)
  • Add Storage Accounts for Azure China (New in v0.7.20160107)
  • Add blob containers with SAS key (New in v0.7.20160107)
  • Local development storage (Windows-only)
  • ARM and Classic resource support
  • Create and delete blobs, queues, or tables
  • Search for specific blobs, queues, or tables
  • Explore the contents of blob containers
  • View and navigate through directories
  • Upload, download, and delete blobs and folders
  • Open and view the contents of text and picture blobs (New in v0.7.20160107)
  • View and edit blob properties and metadata
  • Generate SAS keys
  • Manage and create Stored Access Policies
  • Search for blobs by prefix
  • Drag ‘n drop files to upload or download

This tool currently supports blob operations only; according to Microsoft, support for Tables & Queues is coming soon.

Let’s take a look at this tool and see how we can manage Azure Storage with it. First you need to log into your Azure subscription.

[Screenshot: Storage-Explorer-1.png]

Once you are signed into your Azure subscription you can immediately start navigating through all of your storage accounts.

[Screenshot: Storage-Explorer-3.png]

You can perform the following blob operations by right-clicking a storage blob.

[Screenshot: Storage-Explorer-4.png]

Attaching Storage

If you want to connect to storage accounts in a different Azure subscription, Azure China storage accounts, or any publicly available storage service of which you are not an administrator, you can right-click the Storage node and select Attach External Storage. Here you can provide the Account Name & Access Key to connect to those external storage accounts.

[Screenshot: Storage-Explorer-6.png]

It is also possible to connect to a blob container using a Shared Access Signature (SAS) key; to do so, the SAS key should provide List permissions for that particular blob container.

[Screenshot: Storage-Explorer-7.png]
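If you need to produce such a SAS token yourself, here is a rough sketch using the classic Azure PowerShell storage cmdlets of this era; the account name, key and container name are placeholders.

# Sketch: generate a container SAS token with Read + List permissions
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account key>"
New-AzureStorageContainerSASToken -Name "mycontainer" -Permission rl -Context $ctx -FullUri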

You can download this tool from storageexplorer.com.

The curious case of Microsoft Azure Stack

A lot of people have been asking me what Microsoft Azure Stack will mean for their cloud journey in 2016. Although the product is still invisible to us, just before Christmas 2015 Microsoft released some guidance notes about the hardware specifications needed to run a PoC lab for the Azure Stack Technical Preview.

To run a POC of Azure Stack on a single server, Microsoft suggests the following minimum and recommended configuration.

Component | Minimum | Recommended
Compute: CPU | Dual-Socket: 12 Physical Cores | Dual-Socket: 16 Physical Cores
Compute: Memory | 96 GB RAM | 128 GB RAM
Compute: BIOS | Hyper-V Enabled (with SLAT support) | Hyper-V Enabled (with SLAT support)
Network: NIC | Windows Server 2012 R2 Certification required for NIC; no specialized features required | Windows Server 2012 R2 Certification required for NIC; no specialized features required
Disk drives: Operating System | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD) | 1 OS disk with a minimum of 200 GB available for the system partition (SSD or HDD)
Disk drives: General Azure Stack POC Data | 4 disks, each providing a minimum of 140 GB of capacity (SSD or HDD) | 4 disks, each providing a minimum of 250 GB of capacity
HW logo certification | Certified for Windows Server 2012 R2 | Certified for Windows Server 2012 R2

Storage considerations

Data disk drive configuration: all data drives must be of the same type (SAS or SATA) and capacity. If SAS disk drives are used, the drives must be attached via a single path (no MPIO/multi-path support is provided).
HBA configuration options:
  1. (Preferred) Simple HBA
  2. RAID HBA – adapter must be configured in “pass-through” mode
  3. RAID HBA – disks should be configured as Single-Disk, RAID-0
Supported bus and media type combinations

  • SATA HDD
  • SAS HDD
  • RAID HDD
  • RAID SSD (if the media type is unspecified/unknown*)
  • SATA SSD + SATA HDD**
  • SAS SSD + SAS HDD**

* RAID controllers without pass-through capability can’t recognize the media type. Such controllers mark both HDDs and SSDs as Unspecified. In that case, the SSDs are used as persistent storage instead of as caching devices, so you can deploy the Microsoft Azure Stack POC on those SSDs.

** For tiered storage, you must have at least 3 HDDs.

Example HBAs: LSI 9207-8i, LSI-9300-8i, or LSI-9265-8i in pass-through mode

Furthermore, Microsoft suggests that the Dell R630 and the HPE DL360 Gen9 servers can be utilized for this effort, as both models have been in the market for some time, but you can always go for another vendor/model that fits the above specification.

Below you can listen to Jeffrey Snover himself explaining what goes on behind the scenes in Azure Stack development.

Azure Service Update | Static Public IP addresses for Azure VMs are now available

As you may already be aware, earlier this month Microsoft announced the general availability of the new Azure portal with the Azure Resource Manager (ARM) deployment model. Most of the Azure services available in the classic deployment model are now available in the ARM model, except a very few. Microsoft also introduces regular enhancements to the fabric, providing a much smoother cloud experience to customers.

In the recent service update, Microsoft has enabled the capability for a static public IP address to be assigned directly to a virtual machine (VM) created using the ARM deployment model. Previously we were only able to assign a dynamic public IP address to the network adapter of the VM.
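As a quick sketch, assuming the AzureRM module of this era (all names are placeholders), creating such a static public IP looks like this:

# Sketch: create a static public IP in the ARM model
$pip = New-AzureRmPublicIpAddress -Name "MyStaticPIP" -ResourceGroupName "MyRG" -Location "West US" -AllocationMethod Static
# $pip can then be referenced when creating the VM's network interface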

This differs from the classic deployment model, where a static public IP address can only be assigned to a cloud service; that is called a reserved IP. You can, however, assign an instance-level dynamic PIP to a VM in the classic model, and that hasn’t changed with this service update.
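For comparison, a reserved IP in the classic model is created with the Azure Service Management module, roughly like this (the name is a placeholder):

# Sketch: reserve a public IP in the classic model
New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West US"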

If you are new to Azure networking, the following references will be valuable in deciding how you want to plan networking & connectivity in the cloud.

It’s a Century | Remote mailbox move to Office 365 fails

Today is a special day. Yes, it is a Century indeed.

This post marks the 100th post on my technical blog, and it has been an absolute pleasure to see the community benefiting from my experience in cloud & related technologies. I must thank all my fellow comrades in technical communities worldwide who always encourage me to share new stuff and promote awareness. Also my MVP colleagues: you guys rock! Kudos to you all for being my role models. Years ago I too was an apprentice, but the community paved the way for me to become an MVP and to ignite throughout the world the same passion for knowledge that I have.

And by the way, the 100th post is a break-fix solution, so let’s see how we can troubleshoot an Exchange hybrid deployment with Office 365.

I encountered a strange issue during an Exchange 2013 + Office 365 hybrid deployment a few days back. After the successful completion of the hybrid configuration, I decided to move a few mailboxes to Office 365. I chose a mailbox less than 200 MB in size, but even after 4 hours it was still in the Syncing state.

Looking at the mailbox move log, I could see the error below occurring over and over.

10/19/2015 4:25:56 AM [TY1PR04MB0893] ” created move request.
10/19/2015 4:27:20 AM [HK2PR04MB1010] The Microsoft Exchange Mailbox Replication service ‘HK2PR04MB1010.apcprd04.prod.outlook.com’ (15.1.300.10 caps:7FFF) is examining the request.
10/19/2015 4:27:20 AM [HK2PR04MB1010] Connected to target mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’, database ‘APCPR04DG064-db006’, Mailbox server ‘HK2PR04MB1010.apcprd04.prod.outlook.com’ Version 15.1 (Build 300.0).
10/19/2015 4:27:21 AM [HK2PR04MB1010] Connected to source mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’, database ‘XXXX’, Mailbox server ‘XXXX’ Version 15.0 (Build 1076.0), proxy server ‘XXXX’ 15.0.1076.6 caps:1F7FFFFFCB07FFFF.
10/19/2015 4:27:23 AM [HK2PR04MB1010] Request processing started.
10/19/2015 4:27:23 AM [HK2PR04MB1010] Source mailbox information:
Regular Items: 2075, 139.8 MB (146,642,417 bytes)
Regular Deleted Items: 1447, 122.6 MB (128,550,697 bytes)
FAI Items: 48, 246.9 KB (252,781 bytes)
FAI Deleted Items: 0, 0 B (0 bytes)
10/19/2015 4:27:23 AM [HK2PR04MB1010] Cleared sync state for request 6be5af13-590d-4461-b6bf-07a39d6767c9 due to ‘CleanupOrphanedMailbox’.
10/19/2015 4:27:23 AM [HK2PR04MB1010] Mailbox signature will not be preserved for mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’. Outlook clients will need to restart to access the moved mailbox.
10/19/2015 4:27:25 AM [HK2PR04MB1010] Stage: CreatingFolderHierarchy. Percent complete: 10.
10/19/2015 4:27:26 AM [HK2PR04MB1010] Initializing folder hierarchy from mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’: 56 folders total.
10/19/2015 4:27:26 AM [HK2PR04MB1010] Folder creation progress: 0 folders created in mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’.
10/19/2015 4:28:30 AM [HK2PR04MB1010] Folder hierarchy initialized for mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’: 55 folders created.
10/19/2015 4:28:30 AM [HK2PR04MB1010] Stage: CreatingInitialSyncCheckpoint. Percent complete: 15.
10/19/2015 4:28:30 AM [HK2PR04MB1010] Initial sync checkpoint progress: 0/56 folders processed. Currently processing mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’.
10/19/2015 4:29:10 AM [HK2PR04MB1010] Initial sync checkpoint completed: 49 folders processed.
10/19/2015 4:29:10 AM [HK2PR04MB1010] Stage: LoadingMessages. Percent complete: 20.
10/19/2015 4:30:06 AM [HK2PR04MB1010] Messages have been enumerated successfully. 3569 items loaded. Total size: 262.4 MB (275,191,505 bytes).
10/19/2015 4:30:06 AM [HK2PR04MB1010] Stage: CopyingMessages. Percent complete: 25.
10/19/2015 4:30:06 AM [HK2PR04MB1010] Copy progress: 0/3569 messages, 0 B (0 bytes)/262.4 MB (275,191,505 bytes), 32/56 folders completed.
10/19/2015 4:33:06 AM [HK2PR04MB1010] Transient error DataExportTransientException has occurred. The system will retry (1/60).
10/19/2015 4:34:41 AM [HK2PR04MB1010] The Microsoft Exchange Mailbox Replication service ‘HK2PR04MB1010.apcprd04.prod.outlook.com’ (15.1.300.10 caps:7FFF) is examining the request.
10/19/2015 4:34:42 AM [HK2PR04MB1010] Connected to target mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’, database ‘APCPR04DG064-db006’, Mailbox server ‘HK2PR04MB1010.apcprd04.prod.outlook.com’ Version 15.1 (Build 300.0).
10/19/2015 4:34:49 AM [HK2PR04MB1010] Connected to source mailbox ‘xxxx.onmicrosoft.com\6be5af13-590d-4461-b6bf-07a39d6767c9 (Primary)’, database ‘XXXX’, Mailbox server ‘XXXX’ Version 15.0 (Build 1076.0), proxy server ‘XXXX’ 15.0.1076.6 caps:1F7FFFFFCB07FFFF.
10/19/2015 4:34:51 AM [HK2PR04MB1010] Request processing continued, stage LoadingMessages.
10/19/2015 4:35:48 AM [HK2PR04MB1010] Messages have been enumerated successfully. 3569 items loaded. Total size: 262.4 MB (275,191,505 bytes).
10/19/2015 4:35:48 AM [HK2PR04MB1010] Stage: CopyingMessages. Percent complete: 25.
10/19/2015 4:35:48 AM [HK2PR04MB1010] Copy progress: 0/3569 messages, 0 B (0 bytes)/262.4 MB (275,191,505 bytes), 32/56 folders completed.
10/19/2015 4:39:04 AM [HK2PR04MB1010] Transient error DataExportTransientException has occurred. The system will retry (1/60).

Surprisingly, when I created a new mailbox in Exchange on-premises, initiated a directory synchronization and tried to move that mailbox to the Office 365 tenant, no issues were found. But whenever I tried to move existing mailboxes, the issue persisted. I thought of reducing the mailbox size, but in this scenario that was not an option.

Solution | Tweaking the MsExchangeMailboxReplication.exe.config

According to Microsoft, transient errors are usually connectivity issues to the MRSProxy. The remote move is, to be exact, a pull request from the Office 365 tenant to on-premises. So any timeout can break the operation and result in a never-ending retry loop like this.

The MRSProxy config file, available at C:\Program Files\Microsoft\Exchange Server\V15\Bin\MsExchangeMailboxReplication.exe.config, contains several parameters that we can tune to avoid such issues.

  • Stop the Microsoft Exchange Mailbox Replication Service on all CAS/MBX servers. Without that you may not be allowed to perform the next step.
  • Edit MsExchangeMailboxReplication.exe.config as below.

<MRSProxyConfiguration
    IsEnabled="true"
    MaxMRSConnections="100"
    DataImportTimeout="00:01:00" />

  • Change the above DataImportTimeout value to "00:10:00" (10 minutes)
  • Start the Microsoft Exchange Mailbox Replication Service
  • Perform an iisreset on the CAS servers (a PowerShell sketch of these steps follows)
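A minimal sketch of these steps, assuming you run it in an elevated PowerShell session on each CAS/MBX server:

# Sketch: restart the Mailbox Replication Service around the config edit
Stop-Service MSExchangeMailboxReplication
# ... edit MsExchangeMailboxReplication.exe.config and raise DataImportTimeout ...
Start-Service MSExchangeMailboxReplication
iisreset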

Optionally, you can set ExportBufferSizeOverrideKB to 7500 in the same config file. This is more of an optimization and should not be necessary: it reduces the number of migration calls, especially for larger mailboxes, and reduces the time spent in network latency. Be mindful that you need at least Exchange 2013 SP1 on your CAS servers to be able to edit this value.

If your WAN traffic is highly utilized and unstable, the above changes can save you a good number of troubleshooting hours for remote moves in your hybrid Exchange setup.

DPM 2012 R2 UR7 Re-released

It’s been a while since my last blog post. I’m working on a DPM 2012 R2 test lab these days, which I’ve planned to update to the latest UR version. When I checked for the latest UR7, I got to know that the bits have been re-released.

According to the DPM team, there is an issue in the DPM 2012 R2 UR7 released on 28.07.2015 that caused expired recovery points on disk not to be cleaned up, resulting in an increase in the DPM recovery point volume after installing UR7. This re-release addresses that concern, and as of today you can download the updated bits via the DPM 2012 R2 UR7 KB or the Microsoft Update Catalog.

OK I have updated to UR7 before 21.08.2015. Now what?

Those facing this dilemma should know that the re-released UR7 is not pushed via Microsoft Update; Microsoft advises manually installing the new package on DPM servers that have the older UR7 package installed. The installation process automatically executes the pruneshadowcopiesDpm2010.ps1 PowerShell script, which contains the fix.

Post-deployment Tips

There is no change in the DPM version (4.2.1338.0) in this re-release, and it will remain the same after the update. Also, you will have to update the Azure Backup Agent to the latest version (2.0.8719.0) prior to installing DPM UR7 to ensure the integrity of your cloud backups after this release.
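If you want to double-check the version after updating, one rough way (assuming the default DPM 2012 R2 installation path) is to read the file version of the DPM engine binary:

# Sketch: read the DPM engine file version (default install path assumed)
(Get-Item "C:\Program Files\Microsoft System Center 2012 R2\DPM\DPM\bin\msdpm.exe").VersionInfo.FileVersion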

For those who, like me, update to URs the old-fashioned way (wait for a month or two, look out for bugs and then update), you’ve got nothing to worry about.

Savision Blog | New Webinars & Whitepapers by MVP Alessandro Cardoso

Savision is driving a huge community effort to raise awareness about cloud management strategies. One of my MVP colleagues, Alessandro Cardoso, has published a couple of whitepapers on cloud formulation approaches. These are valuable resources for any individual or organization planning to move into the cloud.

“Investing in the Cloud: Assessing IT & Business Requirements” Whitepaper & Webinar

Cloud is agile and scalable, with a pay-as-you-consume cost model, and people are embracing it at a rapid pace. This whitepaper & webinar focus on planning and assessing business and IT requirements when you are investing in the cloud.

“The Technical Challenges of Cloud Adoption” Whitepaper & Webinar

This series focuses on technical challenges, such as providing optimal deployment, management, automation and user experience, when organizations shift their focus to hosting IT workloads in the cloud for different business units.

You can follow Alessandro via his blog cloudtidings.com or on Twitter @Cloudtidings.


Connect Windows 10 Devices to Azure AD | Part 2

In my last post I explained how to join a Windows 10 device to Azure AD. Now it’s time to check how we can enforce organizational policies on such a device. Before that, let me log off from my standard user account and come back to the logon prompt.

[Screenshot: Win10 Join Azure AD 12]

You can see that my organizational account is displayed on the logon screen. After I have logged in, it will take some time to set up the apps and will test your patience (lol, kidding). Notice that during this time you will be prompted to accept the security policies enforced by your Azure AD tenant. Click the Enforce these policies button to accept.

[Screenshot: Win10 Join Azure AD 10]

Now, to test the functionality once logged in, I’m going to launch the default Mail application. Voilà! My Office 365 e-mail account is already configured there.

[Screenshot: Win10 Join Azure AD 13]

Since my Office 365 Azure AD tenant has been on-boarded to my Azure account, I can actually inspect the devices that I have enrolled. For that, I’m going to view the properties of that particular user.

[Screenshot: Win10 Join Azure AD 11]

Okay, so where are those security policies I talked about? By default, when you enroll a Windows 10 device, policies such as password expiration are provided by Azure AD. But if you need more granular control, such as device sweep, selective wipe or full wipe, you’ll have to integrate Microsoft Intune. My Office 365 E3 tenant already has MDM capability enabled with Intune, therefore I can modify policies as I want from the Office 365 Admin center.

[Screenshot: Win10 Join Azure AD 14]

Although it may seem a long shot, Microsoft’s ultimate goal is to enable mobility for all users. I think this will be a huge leap assisting that vision.