Wednesday, March 01, 2017

The method personal_unlockAccount does not exist/is not available

Wow, it's been almost 5 long years since I wrote a blog post... Life is busy at Microsoft, mostly in the IoT space, but I am back, and for good reason: I felt the need to document my learnings on Blockchain (the technology Bitcoin is based on), because there does not seem to be a lot of technical information out there right now for when you run into problems.

(Skip to the section "The Fix" if you know the background to this problem, as per the title)

If you are new to Blockchain (when I use the word chain from here on in, it means the same thing as Blockchain), then I suggest ramping up using this link. I will write other introductory posts on Blockchain, often focused specifically on running it on Azure, but for now back to the topic of this post: The method personal_unlockAccount does not exist/is not available. If you are using the Nethereum SDK to connect to your chain, the exception you will get back is: Nethereum.JsonRpc.Client.RpcResponseException.

Before you can write to the chain, that is, publish a smart contract, create an account, etc., you have to "unlock" the account that you are using to write to the chain. This problem is very specific to the Ethereum Blockchain: by default the transaction nodes running Ethereum are locked down, meaning you cannot make remote RPC calls into them. Of course you can attach to the geth process on each node and unlock the accounts that way, but to do it programmatically, please continue reading the rest of this post. In Azure we use the Golang implementation of Ethereum (called geth).

The problem you may be getting is also independent of the client library you are using, and there are many out there. I am actually using Nethereum - a .NET implementation of the client, which you can get here: https://github.com/Nethereum/Nethereum. If you prefer JavaScript then the web3 client library may interest you instead.
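
For illustration, here is a minimal sketch of the kind of call that triggers the exception - assuming a Nethereum Web3 client pointed at a transaction node's RPC endpoint. The endpoint, account address and passphrase below are placeholders, and the exact shape of the personal API (e.g. the unlock-duration parameter) may vary across Nethereum versions:

using System;
using Nethereum.Web3;

class UnlockTest
{
    static void Main()
    {
        // Placeholder RPC endpoint of one of your transaction nodes.
        var web3 = new Web3("http://10.0.0.5:8545");

        // This is the call that comes back with RpcResponseException
        // ("The method personal_unlockAccount does not exist/is not available")
        // when the node does not expose the "personal" RPC API.
        var unlocked = web3.Personal.UnlockAccount
            .SendRequestAsync("0xYourAccountAddress", "yourPassphrase", 120)
            .GetAwaiter().GetResult();

        Console.WriteLine("Unlocked: " + unlocked);
    }
}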

The fix
SSH/RDP into one of your transaction nodes on the Ethereum network. This needs to be done on every transaction node within the network.

Edit the file /home//start-private-blockchain.sh, changing the line below (line 54 of that script):

nohup geth --datadir $GETH_HOME -verbosity $VERBOSITY --bootnodes $BOOTNODE_URLS --maxpeers $MAX_PEERS --nat none --networkid $NETWORK_ID --identity $IDENTITY $MINE_OPTIONS $FAST_SYNC --rpc --rpcaddr "$IPADDR" --rpccorsdomain "*" >> $GETH_LOG_FILE_PATH 2>&1 &

To this:

nohup geth --datadir $GETH_HOME -verbosity $VERBOSITY --bootnodes $BOOTNODE_URLS --maxpeers $MAX_PEERS --nat none --networkid $NETWORK_ID --identity $IDENTITY $MINE_OPTIONS $FAST_SYNC --rpc --rpcaddr "$IPADDR" --rpccorsdomain "*" --rpcapi "eth,net,web3,admin,personal" >> $GETH_LOG_FILE_PATH 2>&1 &

Notice the additional parameter that has been added to the above command: --rpcapi "eth,net,web3,admin,personal". This enables all of these APIs over the JSON-RPC protocol, which is what the Ethereum clients use to unlock accounts etc. The Ubuntu image in Azure does not enable these APIs by default, which is why we need to turn them on here. You need to do this for every transaction node within your Ethereum network.

Now reboot your OS by running command:

$ sudo reboot



To test whether this worked, you can run some code against the transaction node - for example, try creating an account (see the sketch after the example output below) - or run the following command:

ps aux | grep geth

If all worked well, then you should get something back that looks like the output below (notice the additional rpcapi parameter):

geth --datadir /home/simon/.ethereum -verbosity 4 --bootnodes enode://_mynode_and_IP/port --bootnodes enode://_mybootnodes_and_port_ --maxpeers 25 --nat none --networkid _my_id_ --identity bled72mw2-tx0 --fast --rpc --rpcaddr _myinternal_IP_ --rpccorsdomain "*" --rpcapi "eth,net,web3,admin,personal"
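
From the client side, creating an account is an equally quick test, since personal_newAccount also rides on the personal API. A hedged sketch, again with a placeholder endpoint and passphrase:

using System;
using Nethereum.Web3;

class CreateAccountTest
{
    static void Main()
    {
        var web3 = new Web3("http://10.0.0.5:8545"); // placeholder endpoint

        // This only succeeds once the node exposes the "personal" RPC API.
        var address = web3.Personal.NewAccount
            .SendRequestAsync("yourPassphrase")      // placeholder passphrase
            .GetAwaiter().GetResult();

        Console.WriteLine("New account: " + address);
    }
}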

Hopefully this worked for you and happy Blockchaining!

Thursday, July 12, 2012

Could not load file or assembly 'Microsoft.Data.Services.Client, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

Have you seen the following error recently while trying to use the new Windows Azure Media Services features:

Could not load file or assembly 'Microsoft.Data.Services.Client, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

I received the above error while attempting to use the Windows Azure Media Services SDK v1.0 CloudMediaContext class:

return new CloudMediaContext("myAccount", "myAccountKey");
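
For context, here is roughly how that call sat in my sample - a minimal console sketch against the Media Services SDK v1.0 API, with placeholder account values (the asset enumeration is just illustrative):

using System;
using Microsoft.WindowsAzure.MediaServices.Client;

class Program
{
    static void Main()
    {
        // Fails at runtime with the FileNotFoundException above if
        // Microsoft.Data.Services.Client v5.0 cannot be loaded.
        var context = new CloudMediaContext("myAccount", "myAccountKey");

        foreach (var asset in context.Assets)
            Console.WriteLine(asset.Name);
    }
}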

After looking at the pre-binding probe log, I found this:

=== Pre-bind state information ===
LOG: User = \
LOG: DisplayName = Microsoft.Data.Services.Client, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
 (Fully-specified)
LOG: Appbase =
file:///C:/Workspaces/Windows Azure Media Services SDK Samples Project/C#/bin/Debug/
LOG: Initial PrivatePath = NULL
Calling assembly : Microsoft.WindowsAzure.MediaServices.Client, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: C:\Workspaces\Windows Azure Media Services SDK Samples Project\C#\bin\Debug\MediaServicesSDKSamples.vshost.exe.Config
LOG: Using host configuration file:
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config.
LOG: Post-policy reference: Microsoft.Data.Services.Client, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: Attempting download of new URL
file:///C:/Workspaces/Windows Azure Media Services SDK Samples Project/C#/bin/Debug/Microsoft.Data.Services.Client.DLL.
LOG: Attempting download of new URL
file:///C:/Workspaces/Windows Azure Media Services SDK Samples Project/C#/bin/Debug/Microsoft.Data.Services.Client/Microsoft.Data.Services.Client.DLL.
LOG: Attempting download of new URL
file:///C:/Workspaces/Windows Azure Media Services SDK Samples Project/C#/bin/Debug/Microsoft.Data.Services.Client.EXE.
LOG: Attempting download of new URL
file:///C:/Workspaces/Windows Azure Media Services SDK Samples Project/C#/bin/Debug/Microsoft.Data.Services.Client/Microsoft.Data.Services.Client.EXE.


It seems the Media Services DLL is dependent on the out-of-band WCF Data Services OData assembly v5.0.

There is a lot of talk on the web that this issue is due to the fact that Windows Azure Media Services v1.0 only works with the Windows Azure SDK v1.6 and will not work with the Windows Azure SDK v1.7 installed as well; for a thread on this, see here: http://social.msdn.microsoft.com/Forums/en-US/MediaServices/thread/d02e268e-e30a-481d-acb7-138646a0c4fb

But... I hadn't read the prerequisite documentation correctly, and once I installed WCF Data Services v5.0 all was OK:

Figure 1: WCF Data Services 5 installer
You can find the pre-requisites here: http://msdn.microsoft.com/en-us/library/jj129588


Monday, June 11, 2012

Microsoft server software support for Windows Azure Virtual Machines

I keep getting asked questions as to what will and will not run on the new Windows Azure Virtual Machines service. This is in regards to Microsoft server products, features and roles on Microsoft platforms.

This Microsoft support article will hopefully clear up the confusion (note it is subject to change): http://support.microsoft.com/kb/2721672

Tuesday, May 01, 2012

SQL Azure vs Microsoft SQL Server

I've been working on a response to an RFI recently and I needed to list the limitations of SQL Azure over SQL Server.

There are limitations, but there is one major advantage of SQL Azure over SQL Server, and that is clustering. SQL Azure out of the box gives me a three-node active cluster without my doing a single thing: every write operation on SQL Azure is also written to two other database replicas within the same data centre.

While we are on the subject of advantages, there is actually one other major one: it is easier to synchronise your SQL Azure database with another data centre for resilience, using the DataSync component, which is based on the Sync Framework and is very easy to set up and use. Synchronising SQL Server is a little more complex because you have more choice: you could also use the Sync Framework, but in a clustered environment you would normally use mirroring, or log shipping with BizTalk Server. Going forward you would use the AlwaysOn Availability Groups provided by SQL Server 2012 (not yet released, BTW).

I found this really good technet resource that compares SQL Azure features with SQL Server: http://social.technet.microsoft.com/wiki/contents/articles/996.compare-sql-server-with-sql-azure.aspx#Scalability

But for convenience, I provided the table here on my blog (for my benefit in the future too!):

Feature: Data Storage
- SQL Server (on-premise): No size limits as such.
- SQL Azure: The Web Edition database is best suited for small web applications and workgroup or departmental applications; this edition supports a database with a maximum size of 1 or 5 GB of data. The Business Edition database is best suited for independent software vendors (ISVs), line-of-business (LOB) applications, and enterprise applications; this edition supports a database of up to 150 GB of data, in increments of 10 GB. Exact size and pricing information can be obtained at Pricing Overview.
- Mitigation: An archival process can be created whereby older data is migrated to another database in SQL Azure or on-premise. Because of the size constraints above, one recommendation is to partition the data across databases; creating multiple databases allows you to take maximum advantage of the computing power of multiple nodes. The biggest value in the Azure model is the elasticity of being able to create as many databases as you need when your demand peaks, and to delete/drop them as your demand subsides. The biggest challenge is writing the application to scale across multiple databases; once this is achieved, the logic can be extended to scale across N databases.

Feature: Edition
- SQL Server (on-premise): Express, Workgroup, Standard, Enterprise.
- SQL Azure: Web Edition, Business Edition. For more information, see Accounts and Billing in SQL Azure.

Feature: Connectivity
- SQL Server (on-premise): SQL Server Management Studio, SQLCMD.
- SQL Azure: The SQL Server Management Studio from SQL Server 2008 R2 and SQL Server 2008 R2 Express can be used to access, configure, manage and administer SQL Azure (previous versions of SQL Server Management Studio are not supported); SQLCMD. For more information, see Tools and Utilities Support.

Feature: Data Migration
- For more information, see Migrating Databases to SQL Azure.

Feature: Authentication
- SQL Server (on-premise): SQL authentication, Windows authentication.
- SQL Azure: SQL Server authentication only.
- Mitigation: Use SQL Server authentication.

Feature: Schema
- SQL Server (on-premise): No such limitation.
- SQL Azure: SQL Azure does not support heaps; ALL tables must have a clustered index before data can be inserted.
- Mitigation: Check all scripts to make sure all table creation scripts include a clustered index. If a table is created without a clustered constraint, a clustered index must be created before an insert operation is allowed on the table.

Feature: TSQL Supportability
- SQL Azure: Certain Transact-SQL commands are fully supported, some are partially supported, and others are unsupported. See Partially Supported Transact-SQL: http://msdn.microsoft.com/en-us/library/ee336267.aspx

Feature: "USE" command
- SQL Server (on-premise): Supported.
- SQL Azure: In Microsoft SQL Azure Database, the USE statement does not switch between databases; to change databases, you must connect directly to the target database.
- Mitigation: In SQL Azure, the databases created by a user may not be on the same physical server, so the application has to retrieve data separately from multiple databases and consolidate it at the application level.

Feature: Transactional Replication
- SQL Server (on-premise): Supported.
- SQL Azure: Not supported.
- Mitigation: You can use BCP or SSIS to get the data out on demand into an on-premise SQL Server. At the time of writing, the Community Technology Preview of SQL Azure Data Sync is also available; you can use it to keep an on-premise SQL Server and SQL Azure in sync, as well as two or more SQL Azure servers. For more information on available migration options, see Migrating Databases to SQL Azure.

Feature: Log Shipping
- SQL Server (on-premise): Supported.
- SQL Azure: Not supported.

Feature: Database Mirroring
- SQL Server (on-premise): Supported.
- SQL Azure: Not supported.

Feature: SQL Agent
- SQL Server (on-premise): Supported.
- SQL Azure: Cannot run SQL Server Agent or jobs on SQL Azure.
- Mitigation: You can run SQL Server Agent on your on-premise SQL Server and connect to SQL Azure Database.

Feature: Server options
- SQL Server (on-premise): Supported.
- SQL Azure: Some system views are supported; for more information, see System Views (SQL Azure Database) on MSDN.
- Note: the idea is that most system-level metadata is disabled, as it does not make sense in a cloud model to expose server-level information.

Feature: Connection Limitations
- SQL Server (on-premise): N/A.
- SQL Azure: In order to provide a good experience to all SQL Azure customers, your connection to the service may be closed. For more information, see General Guidelines and Limitations on MSDN and SQL Azure Connection Management.

Feature: SQL Server Integration Services (SSIS)
- SQL Server (on-premise): Can run SSIS on-premise.
- SQL Azure: The SSIS service is not available on the Azure platform.
- Mitigation: Run SSIS on site and connect to SQL Azure with the ADO.NET provider.
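
To make the Authentication row concrete: since SQL Azure accepts SQL Server authentication only, connecting from .NET is just a plain ADO.NET connection with SQL credentials. A minimal sketch - the server, database, login and password below are all placeholders:

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // SQL Azure supports SQL Server authentication only
        // (no Windows authentication). Placeholder values throughout.
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=myuser@myserver;" +
            "Password=myPassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
                Console.WriteLine(command.ExecuteScalar());
        }
    }
}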


Saturday, April 28, 2012

My first week at Microsoft

***non-techie - personal post***

For those who don't know, I recently joined Microsoft and work on the World Wide Windows Azure Centre of Excellence team - a small team of highly skilled technical people, many of whom have been at Microsoft for a number of years in different organisations. We are known internally as "Azure COE".

The team is based out of Redmond, WA, but I live in the UK and my responsibility is the EMEA region, though no doubt I will get involved in other regions as well once I am settled in. This is the best job I have ever had - I know it is early days, but that is my feeling right now. I am no longer billable, so my responsibility is to help customers, internal employees and partners be successful in deploying Windows Azure as a technology solution, from an architectural and technology perspective.

It is up to me to define what I will take ownership of, and this truly excites me. I will take a few weeks to determine the gaps, the strategic direction, and the areas where I can focus more of my time - that, I find, is the best thing about this role.

It is also a very exciting time for anyone in our industry to be involved in cloud computing. Everyone - ISVs, SIs (system integrators), CIOs, CTOs - should at least consider a cloud platform as a tool in the vast tool set for any business problem that can be solved using technology today. It might not be the correct technology to use in all cases, but it should be considered like any other.

On my first day I arrived in Reading, at Thames Valley Park, UK (known as Microsoft UK). I was quite tired when I arrived after a 3.5 hour drive, the M25 London orbital motorway (known here as the car park) being as congested as ever. I actually found myself watching BBC News 24 on my Samsung Galaxy S2 while parked up on the motorway! It saved me from extreme boredom.

After checking in, I met up with Tim Furnell, who was acting as "host manager" to bed me into Microsoft nicely. Tim did a good job of getting me all the things I needed and getting me connected to corp net.

The next day I worked from home and did a lot of NDA stuff that I can't talk about but that you will eventually hear about in the future. This is another cool thing about the job: I get to hear about, and work on, technology before it's even announced publicly. This enables me to make better decisions for our customers and to be better informed about what's coming. I also get to address what are known as "whitespace" areas, which are gaps in Microsoft products.

That's another thing they love here: home working, office working, coffee shop, wherever - it doesn't matter where you work so long as you get your work done. Not a 9-5 person? Doesn't matter! They promote the use of online conference calls all the time; I have calls pretty much every day over Lync (internal VoIP).

The next two days I spent with my colleague Dennis Mulder from the Netherlands. After those two days my head was rather full of information. You don't realize just how big Microsoft is, and how much information is at your fingertips, until you work here.

Think of the biggest library you have ever set foot in, multiply it by about 1000, and you might come close to the vast quantity of information available here at Microsoft.

Microsoft are very supportive in pretty much every area you can think of, but the most important to me is learning, which they totally promote. You'd think this would be the same for all companies, but many other companies are interested in your billable time first and foremost, and not in you as an individual and your aspirations and career goals.

I'm also finding everything highly efficient and smooth running, because everything is automated where it can be. For example, applying for an AMEX corp card: I applied using //... (internal DNS names are used for everything; there is a name for everything, i.e. //training, //sqlsvr, etc.), and the next day the card turned up!

Booking travel, hotels and cars is automated and billed to your cost centre to reduce the admin around expense claims. The next 3 months will be fairly busy, with a bit of international travel for some really interesting events.

Dennis and I will be running the Tech Ed 2012 EMEA Pre-Conference Azure bootcamp workshop in June, so if you're going we'll see you there!

And finally, if you're from Microsoft reading this and need the Azure CoE's help within EMEA, ping me internally (note we are free - non-UBI).

Saturday, April 14, 2012

SQL Server setup does not support the language of the OS or does not have ENU localized files

I encountered a very strange error this morning when attempting to install Microsoft SQL Server 2008 R2 EN Standard Edition from my file server, which I have successfully done many times before - mainly from VMs.

Today, when I attempted to run the SQL installer on a fairly new Windows 7 EN OS physical machine, I got the following error:

[Screenshot: the "SQL Server setup does not support the language of the OS or does not have ENU localized files" error dialog]

I tried some of the workarounds mentioned on Connect for this error; none worked. So, as my machine is a dev machine, I'm happy to put the Developer Edition on, which doesn't appear to be localized?!?

But before doing this I really wanted to figure out why I was getting this problem. I am running an English version of Windows that is set to the United Kingdom as the locale. Why is this not working for me!?

So then, as I was attempting to install from a network file server, I thought I'd burn the media to disc and try it from a DVD. So I fired up the Windows 7 USB/DVD Download Tool (get it from here: http://wudt.codeplex.com/), ran it, and got this error:

[Screenshot: error from the Windows 7 USB/DVD Download Tool]

So this is really weird, as I have installed SQL Server from that ISO before (at least I think I have).

So in the end I downloaded a fresh copy of SQL Server, ran it from the network server, and it worked! This error, for me, was a red herring.

Tuesday, April 03, 2012

Windows Azure Traffic Manager to handle your public cloud DR strategy

I did a talk last month at the Windows Azure User Group in London. To be honest I had too much content to get through, and part of my talk was meant to cover disaster recovery (DR). I ran out of time and didn't get a chance to talk about it, unfortunately.

What is Disaster Recovery (DR)
Some people say DR is not possible, or at least difficult, to implement on a cloud platform in the PaaS model. This is in contrast to IaaS, where you have more control over the hardware and configuration than you do in PaaS. But hopefully, after you have read this article, you'll realise it is now very easy in Windows Azure.

For people not aware of DR, I'll explain it using a picture that illustrates the problem when using Windows Azure; consider figure 1 below:
Figure 1: Basic disaster recovery failover configuration
In the above diagram we have some roles running in Windows Azure data centres in both Europe and North Central US, and a consumer that is consuming from one data centre: Europe.

The configuration depicted above is known as a failover or active/passive configuration, which is very commonly found in disaster recovery configurations in the enterprise today. If the above were a private data centre, whether a private cloud or a traditional data centre, it would look almost identical.

When you deploy an application in an Azure data centre, you can't spread it across multiple data centres for resilience or to implement DR. Well, you can, but you'll get a different URL for each data centre you deploy your application into. For example, for the Europe data centre above, our full URL could be: http://myamazingapp-europe.cloudapp.net/

Then for the North Central US data centre, the URL has to be something different like: http://myamazingapp-us.cloudapp.net/

Because we have two different URLs, we have a problem: if the Europe data centre goes down (in the above case our active configuration), users will not be able to use the application, which affects availability, unless the passive USA data centre can take over as the active configuration.

This means the Europe data centre is currently serving all the user requests, and the USA data centre is in a passive/sleep state - in other words, not being used unless there is a failure in the Europe data centre. When/if this failure occurs, we need to switch from Europe to the USA data centre. This is not easy with the current configuration, because the users/actors would have to switch URLs from Europe to the USA data centre - far from ideal, as a user/actor probably wouldn't know when to try the other URL. It really needs to be seamless.

So we really want to use a single URL that has the ability to reference both data centres when we need to without the users actually knowing.

A simple way to implement DR in Windows Azure
This is where DNS (the Domain Name System) plays a very important role in infrastructure and helps us solve this problem relatively easily.

Now consider an amendment to the above diagram (figure 1) to abstract the user from the *.cloudapp.net domain name, using an internet DNS registrar and a CNAME record that resolves to the required Azure data centre. Remember, each cloudapp sub-domain represents a single data centre region. When you design a disaster recovery solution, you wouldn't normally use the same data centre, as that rather defeats the purpose of having a DR strategy.

Figure 2: Basic disaster recovery failover configuration with DNS
With the amendment above, we can give the URL http://myamazingapp.com to end users/actors. They are now completely unaware of where their application is being served from - which is how it should be.

Of course, they could run a trace (TRACERT) on http://myamazingapp.com and see where it resolves to. In fact, I have made the above configuration on an application I have deployed in Azure right now. If I run TRACERT on my sub-domain http://remotemedia.simonrhart.com, I get the following:

Figure 3: Running TRACERT on my sample app hosted in Azure
You can see from the above trace that my sub-domain resolves to Microsoft's data centre DNS name http://remotemedia.cloudapp.net, IP address 94.245.89.251.

We know that IP address is a real Azure data centre as it is registered to Microsoft. Here is the result from running a whois on the resolved IP address:
WHOIS information for 94.245.89.251:

  
[Querying whois.arin.net]
[Redirected to whois.ripe.net:43]
[Querying whois.ripe.net]
[whois.ripe.net]
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf
% Note: this output has been filtered.
%       To receive output for a database update, use the "-B" flag.
% Information related to '94.245.64.0 - 94.245.127.255'
inetnum:         94.245.64.0 - 94.245.127.255
descr:           Microsoft Limited
org:             ORG-MA42-RIPE
netname:         UK-MICROSOFT-20081107
country:         GB
admin-c:         AS9763-RIPE
tech-c:          EN603-RIPE
tech-c:          BR329-ARIN
status:          ALLOCATED PA
mnt-by:          RIPE-NCC-HM-MNT
mnt-lower:       MICROSOFT-MAINT
mnt-domains:     MICROSOFT-MAINT
mnt-routes:      MICROSOFT-MAINT
source:          RIPE # Filtered
organisation:   ORG-MA42-RIPE
org-name:       Microsoft Limited
org-type:       LIR
address:        Microsoft
                Darren Norman
                One Microsoft Way
                WA 98052 Redmond
                UNITED STATES
phone:          +1 (425) 703 6647
fax-no:         +1 425 936 7329
e-mail:         danorm@microsoft.com
admin-c:        NORM1-RIPE
admin-c:        NORM1-RIPE
admin-c:        NORM1-RIPE
mnt-ref:        MICROSOFT-MAINT
mnt-ref:        RIPE-NCC-HM-MNT
mnt-by:         RIPE-NCC-HM-MNT
source:         RIPE # Filtered
person:         Allie Settlemyre
address:        Microsoft Limited
address:        One Microsoft Way,
address:        Redmond, WA 98052
address:        USA
phone:          +1 (425) 705 0516
phone:          +1 (425) 936 7329
e-mail:         iprrms@microsoft.com
nic-hdl:        AS9763-RIPE
source:         RIPE # Filtered
person:         Bharat Ranjan
address:        Microsoft Corporation
address:        Redmond, WA, 98102
address:        One Microsoft Way
address:        USA
phone:          +1 (425) 706 3230
fax-no:         +1 (425) 936 7329
nic-hdl:        BR329-ARIN
source:         RIPE # Filtered
e-mail:         bharatr@microsoft.com
person:         Edet Nkposong
address:        Microsoft, One Microsoft Way,Redmond, WA 98052
address:        USA
e-mail:         edetn@microsoft.com
phone:          +14257071045
nic-hdl:        EN603-RIPE
mnt-by:         MICROSOFT-MAINT
source:         RIPE # Filtered

So that is wonderful, isn't it? DR and failover problem sorted. Well, kind of. It's not perfect, as it's very manual. If the European data centre where my application is deployed goes down, I need to know about it so I can tell my DNS registrar to change the CNAME record to point to the application deployed in the DR data centre - North Central US.

This means I will have to log into my DNS registrar and change the CNAME when a failure occurs like so:
Figure 4: Setting up a CNAME record
I don't really want my IT admins having to deal with this, as it's expensive and adds complexity. I could automate it, but then I'd have to put a load of process in place and write some custom code, not to mention I'd need some infrastructure running on-premise (most probably).

Surely there is a better way?

Windows Azure Traffic Manager
Although what I have talked about above will work and is fairly simple - I have done it this way for some time - thankfully there is a better way. Microsoft has made available, in Community Technology Preview (CTP), a feature called Windows Azure Traffic Manager.

Unlike the beta programmes in Azure, you can start using Traffic Manager right away; there is no request to make in order to start using it, as there is with the beta programmes.

Windows Azure Traffic Manager can handle your failover DR strategy without your having to touch any DNS server/registrar once it is set up, and more. It supports the following load-balancing methods:
  1. Performance – traffic is forwarded to the closest hosted service in terms of network latency
  2. Round Robin – traffic is distributed equally across all hosted services
  3. Failover – traffic is sent to a primary service and, if this service goes offline, to the next available service in a list
As we are talking about failover, the feature we need from Traffic Manager is number 3: Failover.
So Traffic Manager will solve our problem of having to manually update the DNS registrar with the new Azure data centre DNS cloudapp domain name. Great, how do I do it?

Enabling Traffic Manager
To start using Traffic Manager you need to use the Windows Azure Management Portal to create a policy.

To do this navigate to the Windows Azure Management Portal and sign-in: http://windows.azure.com. Then click Virtual Network > Get Started With Traffic Manager.


See figure 5 below:

Figure 5: Getting started with Windows Azure Traffic Manager
Notice how this is different from using the beta programmes in Windows Azure. With Traffic Manager you can start using it straight away and right now there is no cost to using it.

Once you click the Get Started with Traffic Manager button, you'll see a dialog box similar to the following popup:

Figure 6: Creating a Traffic Manager Policy

Note: there is a lab that covers all of this Traffic Manager setup here: http://msdn.microsoft.com/en-us/gg197529, but I have included it here to give the bigger picture of what specifically Traffic Manager is designed to solve, and how you would solve these problems without it.

I have filled in the policy above as per the original high-level architecture diagram in figure 1. Note: the DNS names differ from my diagram, but the concept and design are the same.

In the above, as mentioned, we select Failover as the load-balancing method. We specify Europe (remotemedia) as our primary, active configuration and the North Central US (remotemedia-dr) namespace as the failover data centre. The latter is our passive configuration: the application is there, deployed and waiting to be used should a failure occur.

Some data here is important. One piece is the DNS time to live (TTL): this is the maximum time users will have to wait before the DNS record is updated with the new URL should a failure occur. The default is 5 minutes (300 seconds). The other important piece of information is the Traffic Manager DNS Prefix field.

Well, the Traffic Manager DNS Prefix can be anything we want (so long as it hasn't already been used), as the users will never see it. Later we will reconfigure our DNS registrar to point to this DNS address.

Once I click OK, the policy is then created and it is active in Traffic Manager:

Figure 7: Our policy in traffic manager
Figure 7 above shows how these policies look in Traffic Manager. There is one thing left to do though, and that is to configure our DNS registrar to point our custom DNS name to the Traffic Manager policy URL we chose.

Figure 8: That's it, our DNS configured and never needs to change again!
Figure 8 above shows our final DNS configuration. So what happened here?

We are simply handing the problem of failover over to Windows Azure. So in the above case, Azure will handle changing the DNS CNAME configuration should a failure occur.

Making Sure Traffic Manager is Working
What we now need to do is test that the Traffic Manager failover feature is working correctly.

If we now run a trace route on our new Traffic Manager URL, it should resolve to the Europe data centre (in my case http://remotemedia.cloudapp.net) - remember I have two data centres, one in Europe (active) and one in North Central US (passive):

Figure 9: Tracing traffic manager configuration
So I'm happy with that, Traffic Manager's DNS configuration looks correct to me.
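
If you would rather script this check than eyeball TRACERT output, the .NET DNS APIs can do the same job. This is just an illustrative console sketch (the host name is mine; substitute your own), not part of Traffic Manager itself:

using System;
using System.Net;

class ResolveCheck
{
    static void Main()
    {
        // Resolve the custom domain and print what it currently points at.
        // After a failover (and once the TTL has expired), the output
        // should show the DR data centre's canonical name/IP instead.
        var entry = Dns.GetHostEntry("remotemedia.simonrhart.com");

        Console.WriteLine("Canonical name: " + entry.HostName);
        foreach (var address in entry.AddressList)
            Console.WriteLine("Resolves to: " + address);
    }
}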

Now I want to force a failure so I can test the failover. This is easy, all I need to do is shutdown the Europe data centre services like so:

Figure 10: Shutting down active node in Windows Azure
Now that my Europe data centre services are not running as per figure 10 above, I'll need to wait the 5 minutes (which is what I configured) before I test the failover.

Once 5 minutes has elapsed, I'll run the same trace route command via a command-prompt like so:

Figure 11: Tracing now that Europe services are down
I think this is a success; notice the trace now resolves to our North Central US data centre (my URL: http://remotemedia-dr.cloudapp.net).

Also, if I run the trace one layer out from my custom domain: http://remotemedia.simonrhart.com, I get the expected failover data centre as above [remotemedia-dr.cloudapp.net]:

Figure 12: Running a trace route from my custom domain
So now you can see how the actual Traffic Manager DNS that you pick can be anything you want, it doesn't really matter what it is.

How does all this look? Consider the new, amended high-level architecture diagram in figure 13 below:

Figure 13: Complete high-level architecture diagram using Traffic Manager for DR

Conclusion
So I think Windows Azure Traffic Manager is a good solution for your Windows Azure failover needs. Check out the Traffic Manager training lab for a hands-on exercise on how to use it in more detail.

In this article I have used a public DNS registrar, but if your users are within a corporate LAN and you want to make use of a public cloud platform like Windows Azure, the same concepts apply to an internal DNS server farm.

In this blog post, I wanted to show how DR can be done in a PaaS model like Windows Azure - hopefully you can see how easy it is with Windows Azure Traffic Manager.