Using the Queuing Service in Windows Azure

Windows Azure is Microsoft’s cloud computing platform, and it comprises a series of services. The storage family of services is REST based, making it available to any developer on any platform. These services include:

  • BLOB storage, for your files,
  • Tables for your structured, non-relational data, and
  • Queues to store messages that will be picked up later.

The Windows Azure Platform also offers SQL Azure for relational data. While SQL Azure is a way to store data in Windows Azure, it is not technically part of Windows Azure Storage; it is its own product. SQL Azure is also not based on REST, but on TDS.

In this article we are going to focus on the easiest of these services to work with, the Queues. We will also look at when and how you might use Queues in your application.

What are Queues?

A queue is simply a list of messages. Messages flow from the bottom of the queue to the top of the queue in the order they were added, which is why computer scientists know the queue as a FIFO (First In, First Out) data structure.
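The FIFO behavior is easy to see with the in-memory Queue<T> class from the .NET base class library. This is just an illustration of the ordering, not the Windows Azure queue API:

```csharp
using System;
using System.Collections.Generic;

public static class FifoDemo
{
    public static void Main()
    {
        var queue = new Queue<string>();

        // Messages enter at the back of the queue...
        queue.Enqueue("first");
        queue.Enqueue("second");
        queue.Enqueue("third");

        // ...and leave from the front, in the order they arrived.
        Console.WriteLine(queue.Dequeue()); // first
        Console.WriteLine(queue.Dequeue()); // second
        Console.WriteLine(queue.Dequeue()); // third
    }
}
```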

You can think of a queue like a line at the bank. As customers enter the bank, they enter the bottom of the queue (or the back of the line). As the single teller finishes with each customer, the line moves forward, and people eventually get to the head of the line, and get their turn with the teller.

Just like how a bank may have many sets of doors that a customer may arrive through, a queue may have several message producers adding messages to the queue. These producers may have nothing to do with each other, and in some instances may create messages with different content and purposes.

A bank may, when the line gets long enough, open up more teller windows, and your application can do the same: you can change how many consumers you have taking messages off of the queue and processing the data.

Queues in and of themselves are pretty simple beasts, and have been around for a long time as a technology. They are also relatively simple to work with, highly reliable and performant: a single queue in Windows Azure can handle 500 operations per second, including putting, getting, and removing messages.

Windows Azure holds your data in a storage account, which you can create as part of your Windows Azure subscription. Each subscription can have up to five storage accounts by default; the limit can be increased by calling tech support.

A storage account will have a name, for example, OrdersData, and a storage key. The storage key acts as your password into that storage account. If anyone has both the name and the key, they will have full permissions to your storage, so you will want to protect them.

A single storage account can hold any combination of Blob, Queue, and Table data, up to a total capacity of 100TB. Any data stored in a storage account is replicated three times to provide high availability and reliability.

Starting the Sample

We are going to create a sample comprised of two console applications. One will be the producer, putting messages on the queue; these messages are meant to be commands for a robot. The other console application will play the role of the robot, the consumer.

To get started, you will need to install the Windows Azure Tools for Microsoft Visual Studio. The current version is 1.4, and that is the version we will be using; you can download it from the Microsoft Download Center.

Once you have the SDK installed, start the storage emulator. You should find it as Start > All Programs > Windows Azure SDK v1.4 > Storage Emulator. You must run this as an Administrator. The emulator runs a simulation of the real Windows Azure storage services locally for development purposes.

Now open Visual Studio 2010, also in Admin mode. The Windows Azure SDK requires Admin mode because of how the Windows Azure emulator works behind the scenes.

  1. Create a new blank solution.
  2. Add a C# Console Application Project to it. We will name this first console project Producer because it will be our little application for producing messages and adding them to the queue.
  3. Add a reference to the Microsoft.WindowsAzure.StorageClient assembly to the new project. You’ll find it in %ProgramFiles%\Windows Azure SDK\v1.4\bin.
  4. Add a second reference, to the System.Configuration assembly.
  5. Add an app.config file to your solution.

When app.config appears in Visual Studio, add the following appSettings element. This tells the Storage Client where to connect, much like providing a connection string to a database. We are using a connection string that connects to the local storage emulator instead of the real Queue service in the cloud.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="DataConnectionString" value="UseDevelopmentStorage=true" />
  </appSettings>
</configuration>
Now open Program.cs if it isn’t open already and add the following using statements:

using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure;
using System.Configuration;
using System.Threading;

Now we start writing our application in the Main method of Program.cs. To begin with, we need to tell the Windows Azure Storage Client not to look in the Windows Azure project for its configuration details. There isn’t a Windows Azure project in this solution as this code will run on our local PC instead of in the cloud, so we have to add a few lines of code that tell it to look in app.config for its configuration.

private static void Main(string[] args)
{
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]));

The next step is to get a reference to our queue. To connect to the queue we need to first connect to the storage account, and then to the Queue service; they live in a hierarchy. The queue is contained in the Queue service, which is contained inside your storage account in Windows Azure.

Once we have done that we will get a reference to the queue itself. The trick here is that you can get a reference to a queue, even when it doesn’t exist yet. This is how you create a queue. It seems weird, but you will get used to it. The FromConfigurationSetting() method will look in your cloud service configuration file for the DataConnectionString configuration value. Of course you can name the configuration element anything you would like.

    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var queueClient = storageAccount.CreateCloudQueueClient();
    var queue = queueClient.GetQueueReference("robotcommands");
    queue.CreateIfNotExist();

In our case, the queue named ‘robotcommands’ doesn’t exist yet.

It is important to note that all queue names must be lower case. You will forget this one day, and you will spend hours figuring out why your code isn’t working, and then you will remember me saying over and over again that the queue name must be lower case.

The CreateIfNotExist() method will see if the queue really does exist in Windows Azure, and if it doesn’t it will create it for you. This code will leave you with a queue object (of type CloudQueue) that will let you work with the queue you have selected or created.

What are Messages?

So now that we have a queue, what do we put in it? Well, messages of course. Messages in Windows Azure queues are meant to be very small, limited to 8KB in size. This helps keep the queue super-fast, and makes it easy for these messages to travel over the wire as part of REST.
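Because of that 8KB cap, it can be worth validating a message’s size before you enqueue it. Here is a minimal sketch; the MaxMessageBytes constant and FitsInQueueMessage helper are invented names, not part of the SDK, and it assumes the storage client Base64-encodes text content on the wire (which grows it by roughly a third):

```csharp
using System;
using System.Text;

public static class MessageSizeCheck
{
    // The 8KB limit on the serialized message body (invented constant name).
    private const int MaxMessageBytes = 8 * 1024;

    public static bool FitsInQueueMessage(string content)
    {
        // Count the UTF-8 bytes of the raw content.
        int rawBytes = Encoding.UTF8.GetByteCount(content);

        // Base64 encodes every 3 input bytes as 4 output characters,
        // so compare the encoded size against the limit.
        int encodedBytes = ((rawBytes + 2) / 3) * 4;

        return encodedBytes <= MaxMessageBytes;
    }
}
```

A producer could call this check and reject or split any command that is too large, rather than letting the service return an error.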

Creating a message is fairly simple. You create a CloudQueueMessage with the contents of the message, and then add it to the queue object from above. You can put any text in the message that you want, including encoded binary data. In our sample, we are now going to create a message and add it to the queue, using user-entered input as the contents of the message. We use an infinite loop to continuously receive input from the user; if the user enters ‘exit’ we break the loop and end the program.

    string enteredCommand = string.Empty;
    Console.WriteLine("Welcome to the robot command queue system. Enter 'exit' to stop sending commands.");
    while (true)
    {
        Console.Write("Enter a command to be queued up for the robot:");
        enteredCommand = Console.ReadLine();
        if (enteredCommand != "exit")
        {
            queue.AddMessage(new CloudQueueMessage(enteredCommand));
            Console.WriteLine("Command sent.");
        }
        else
        {
            break;
        }
    }

The important line here is the queue.AddMessage() line. We create a new CloudQueueMessage, passing in the data entered by the user; this creates the message we want to send. We then hand that message to the AddMessage() method, which sends it to the queue.

That’s all we need to do to create our producer application. We can now send messages, through a queue, to our robot.

Writing the Consumer Application

We now need to write the application that will represent our robot. It will continually check the queue for any messages that have been sent to it, and then, presumably, execute them somehow.

  1. In Visual Studio, click File > Add > New Project.
  2. Select Console Application, set its name to Consumer and hit OK.
  3. Add references to the Microsoft.WindowsAzure.StorageClient and System.Configuration assemblies, as you did for the Producer project.
  4. Add an app.config file to the Consumer project, and add the same appSettings element to this file as you did for the Producer project.

Now open Program.cs for the Consumer project if it isn’t already open. Initially, this application needs the same configuration and queue setup as the producer application, so our first additions replicate those made in the Starting the Sample section.

namespace Consumer
{
    using System;
    using System.Linq;
    using Microsoft.WindowsAzure.StorageClient;
    using Microsoft.WindowsAzure;
    using System.Configuration;
    using System.Threading;

    public static class Program
    {
        private static void Main(string[] args)
        {
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(ConfigurationManager.AppSettings[configName]));

            var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var queueClient = storageAccount.CreateCloudQueueClient();
            var queue = queueClient.GetQueueReference("robotcommands");
            queue.CreateIfNotExist();
        }
    }
}

Now to move on to the guts of our consumer application. The consumer of the queue will connect to the queue just like the message producer code. Once you have a reference to the queue you can call GetMessage(). A consumer will normally do this from within a polling loop that will never end. An example of this type of loop, without all of the error checking that you would normally include, is below.

In this while loop we will get the next message on the queue. If the queue is empty, the GetMessage() method will return a null. If we get a null then we want to sleep for some period of time. In this example we are sleeping for five seconds before we poll again. Sometimes you might sleep a shorter period of time (speeding up the poll loop and fetching messages more aggressively), and sometimes you might want to slow the poll loop down. We will look at how to do this later in this article.

The pattern you should follow in this loop is:

  1. Get Message
    • If no message available, sleep for five seconds
  2. Process the Message
  3. Delete the Message

The code that will do this is as follows. Add it to the Main() method after the call to queue.CreateIfNotExist().

            CloudQueueMessage newMessage = null;
            double secondsToDelay = 5;
            Console.WriteLine("Will start reading the command queue, and output them to the screen.");
            Console.WriteLine(string.Format("Polling queue every {0} second(s).", secondsToDelay));
            while (true)
            {
                newMessage = queue.GetMessage();
                if (newMessage != null)
                {
                    Console.WriteLine(string.Format("Received Command: {0}", newMessage.AsString));
                    // Process the message here, then remove it from the queue.
                    queue.DeleteMessage(newMessage);
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromSeconds(secondsToDelay));
                }
            }

If a message is found we will then want to process it. This is whatever work you have for that message to do. Messages generally follow what is called the Work Ticket pattern: the message includes key data for the work to be done, but not the full data itself. This keeps the message light and easy to move around. In this case the message is just a simple command for the robot to process.
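For example, rather than embedding a whole order in a message, a work ticket would carry just enough information to find the work. The "command:id" format and the class below are purely invented for illustration:

```csharp
using System;

public static class WorkTicket
{
    // Build a small 'work ticket' message: a command name plus the key
    // of the real data, which lives in a database or in Blob storage.
    public static string Create(string command, int orderId)
    {
        return string.Format("{0}:{1}", command, orderId);
    }

    // The consumer parses the ticket and uses the key to fetch
    // the full data from wherever it is stored.
    public static int ParseOrderId(string ticket)
    {
        string[] parts = ticket.Split(':');
        return int.Parse(parts[1]);
    }
}
```

The producer would enqueue WorkTicket.Create("ProcessOrder", 1234), and the consumer would call ParseOrderId to look up the real order before doing the work.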

After the work is completed we want to remove the message from the queue so that it is not processed again. This is accomplished with the DeleteMessage() method. We need to pass in the original message, because the service needs to know the message id and the pop receipt (more on this in The Message Lifecycle section) to perform the delete. The loop then continues on with its polling and processing.

Running the Sample

You should now have a Visual Studio solution with two projects in it: a console application called Producer that generates robot commands and submits them to your queue, and a second console application called Consumer that plays the role of the robot, consuming messages from the queue.

We need to run both of these console applications at the same time, which you can’t normally do with F5 in Visual Studio. The trick is to right-click on each project name, select Debug, and then select ‘Start new instance’. It doesn’t matter which one you start first.

After you do this you will have two console windows open, one for each application. Make sure the storage emulator from the Windows Azure SDK is already running before you start the applications, then use the Producer application to start creating messages to be sent to the queue.

[Figure: the CmdProducer and CmdConsumer console windows]

The Message Lifecycle

The prior section mentioned something called a pop receipt. The pop receipt is an important part of the lifecycle of a message in the queue. When a message is grabbed from the top of the queue it is not actually removed from the queue; that doesn’t happen until DeleteMessage is called later. The message stays on the queue but is marked invisible. Each time a message is retrieved from the queue, the consumer can determine how long this invisibility timeout should be, based on its processing logic. It defaults to 30 seconds, and can be as long as two hours. The consumer is also given a unique pop receipt for that get operation. Once a message is marked as invisible and the timeout clock starts ticking, there isn’t a way to end it early; you must wait for the full timeout to expire.

When the consumer comes back, within the timeout window, with the proper receipt id, the message can then be deleted.

If the consumer does not try to delete the message within the timeout window, the message will become visible again, at the position it had in the queue to begin with. Perhaps during this window of time the server processing the message crashed, or something untoward happened. The queue remains reliable by marking the message as visible again so another consumer can pick the message up and have a chance to process it. In this way a message can never be lost, which is critical when using a queuing system. No one wants to lose the $50,000 order for pencils that just came in from your best customer.

This does lead us to one small problem. Let’s say our message was picked up by server A, but server A never returned to delete it, and the message timed out. The message then became visible again, and our second server, server B, finds the message, picks it up and processes it. When it picks up the message it receives a new pop receipt, making the pop receipt originally given to server A invalid.

During this time, we find out that server A didn’t actually crash, it just took longer to process the message than we predicted with the timeout window. It comes back after all of its hard work and tries to delete the message with its old pop receipt. Because the old pop receipt is invalid server A will receive an exception telling it that the message has been picked up by another processor.

This failure recovery process rarely happens, and it is there for your protection, but it can lead to a message being picked up more than once. Each message has a property, DequeueCount, that tells you how many times the message has been retrieved for processing. In our example above, when server A first received the message, the DequeueCount would be 1; when server B picked it up, after server A’s tardiness, the DequeueCount would be 2.

In this way you can detect a poison message and route it to a repair and resubmit process. A poison message is a message that is continually failing to be processed correctly, usually because some data in its contents causes the processing code to fail. Since the processing fails, the message’s timeout expires and it reappears on the queue. The repair and resubmit process is sometimes a queue that is managed by a human, or a write out to Blob storage, or some other mechanism that allows the system to keep processing messages without being stuck in an infinite loop on one message. You need to check for, and set a threshold on, the DequeueCount yourself. For example:

if (newMessage.DequeueCount > 5)
{
    // Route the message to your repair and resubmit process,
    // then delete it from the queue so it stops recycling.
    queue.DeleteMessage(newMessage);
}
Word of the Day: Idempotent

Since a message can actually be picked up more than once, we have to keep in mind that the queue service guarantees that a message will be delivered, AT LEAST ONCE.

This means you need to make sure that the ‘do work here’ code is idempotent in nature. Idempotent means that a process can be repeated without changing the outcome. For example, if the ATM were not idempotent when I deposited $10, and a failure led to my deposit being processed more than once, I would end up with more than ten dollars in my account. If the ATM were idempotent, then even if the deposit transaction were processed ten times, I would still get only ten dollars deposited into my account.

You need to make sure that your processing code is idempotent. There are several ways to do this; usually you should build it into the nature of the backend systems that are consuming the messages. In our robot example we wouldn’t want the robot to execute a single ‘Turn Left’ command twice because it accidentally handled the same message twice. In this scenario we might track the message id of each message processed, and check that list before executing a command to make sure we haven’t already processed it.
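A minimal sketch of that idea is below. The class and method names are invented for illustration, and the processed-id set is held in memory; in a real system it would need to live in durable storage shared by all consumers:

```csharp
using System;
using System.Collections.Generic;

public class IdempotentProcessor
{
    // Ids of messages we have already handled. In production this
    // would be a durable, shared store, not an in-memory set.
    private readonly HashSet<string> processedIds = new HashSet<string>();

    // Returns true if the command was executed, or false if this was
    // a duplicate delivery that we safely skipped.
    public bool TryProcess(string messageId, Action executeCommand)
    {
        if (processedIds.Contains(messageId))
        {
            return false; // already handled; don't turn left twice
        }

        executeCommand();
        processedIds.Add(messageId);
        return true;
    }
}
```

The consumer would pass newMessage.Id and the work to perform; a second delivery of the same message then becomes a harmless no-op.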

When Queues are Useful

We can see that Windows Azure Queues are very simple to use. Queues become an important tool when we try to decouple parts of our system from each other. They provide an excellent way for two components (either in the same system, or in different systems altogether) to communicate (in a single-directional manner) without having any dependencies on each other.

These two sides of the communication (the producer and the consumer of the messages) don’t have to be running in Windows Azure. Perhaps the producer is a laptop application that is used by the field sales force to process and submit orders back to corporate. The consumer could be a mainframe behind the firewall at corporate that then reaches out and pulls down the messages in the queue to process them.

This is a great way to reduce the dependency from the sender on the receiver, giving you much more flexibility in your architecture, and reducing brittleness. If that mainframe is ever updated to a .NET application running on servers in the corporate datacentre, the producers of the message never need to know or care.

Other Queue Tips

We mentioned earlier that you may want to adjust how often you poll the queue. This will mostly depend on how you need to consume the messages. In our mainframe example, we might be tied to a nightly batch process, where the mainframe connects only once an evening to pull down all of the orders that built up during the day. This is called a long queue, because you expect messages to stay in the queue for a longer period of time before they are processed.

Other queue polling techniques rely on self-adjusting the delay in the loop. A common algorithm for this is called Truncated Exponential Back Off. This approach is taken from how TCP manages the sending and receiving of packets over the network.

With this algorithm you define a minimum polling delay (perhaps 1 second) and a maximum delay (perhaps 60 seconds), and vary the delay of the polling loop over time. Each time the queue is found to be empty we double the current delay, so as the queue remains empty we poll less and less often: first delaying the loop by 1 second per poll, then 2, then 4, 8, 16, 32, and so on until we reach our maximum delay of 60 seconds.

If we ever find a message in the queue, then we know that there is some traffic and we should speed up our polling loop. There are two approaches to take in this case. The first is to gradually speed up the loop by cutting the delay in half each time you find a message. In this manner your delay would go from 60 to 30, to 15, and eventually back down to 1 second if there are enough messages in the queue. The alternative approach is to immediately shorten your polling delay to 1 second as soon as you find a message in the queue. This is useful when you know the message pattern involves groups of messages, instead of lone messages.
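The first variant, doubling on empty and halving on traffic, can be sketched as a small class. The BackOffDelay name is invented; only the doubling and halving policy comes from the article:

```csharp
using System;

public class BackOffDelay
{
    private readonly double minSeconds;
    private readonly double maxSeconds;

    // The delay the polling loop should currently use between polls.
    public double CurrentSeconds { get; private set; }

    public BackOffDelay(double minSeconds, double maxSeconds)
    {
        this.minSeconds = minSeconds;
        this.maxSeconds = maxSeconds;
        this.CurrentSeconds = minSeconds;
    }

    // Queue was empty: back off by doubling, capped at the maximum.
    public void QueueWasEmpty()
    {
        CurrentSeconds = Math.Min(CurrentSeconds * 2, maxSeconds);
    }

    // Found a message: speed up by halving, floored at the minimum.
    public void MessageFound()
    {
        CurrentSeconds = Math.Max(CurrentSeconds / 2, minSeconds);
    }
}
```

The polling loop would call QueueWasEmpty() before each Thread.Sleep when GetMessage() returns null, and MessageFound() whenever it returns a message.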


Summary

In this article we have explained how queues work, and how we can use them to decouple our systems and provide the robustness our architectures often need. Using messages is quite easy, with simple methods for putting and getting messages onto and off of the queue. There are many ways you can use a queue in your system, and we looked at only a few possibilities, including a regular polling loop, a long queue used for infrequent processing, and truncated exponential back off polling that allows our queue polling to speed up and slow down depending on usage.

About the author

Brian Prince United States

Architect Evangelist at Microsoft
