Above The Cloud

 05 Jun 2011 @ 10:11 PM 

Steve Marx did a talk at the MVP Summit at the beginning of the year about things you can do with Windows Azure, and the talk then featured at Mix as well. I followed his lead and delivered a similar talk at Remix Australia which was titled ‘10 Things you didn’t know about Windows Azure’.

Since my blogging has been somewhat lax of late, I wanted to take this opportunity to quickly cover off those things I spoke about, in case you also were unaware of some of the things you can do.

Combine Web and Worker Roles

I talked about web and worker roles quite some time ago. The key premise is the same; web roles are great for hosting websites and services behind load balancers, while worker roles are like Windows Services and are good for background processing. But perhaps you were unaware that you can combine worker role functionality into your web role?

Doing so is really quite easy. Both worker roles and web roles can have a class added to the project that inherits from ‘RoleEntryPoint’ (the class remarks indicate this is mandatory for worker roles). Inside this class is a method called ‘Run’ that can be overridden. When you override it, you can provide code similar to typical worker role code: start an infinite loop and perform some ‘work’. By overriding this method in your web role you can achieve worker role behaviour.

Warning: The default implementation of RoleEntryPoint.Run is to sleep indefinitely and never return; returning from the Run method will cause the instance to be recycled. Therefore, if you override the Run method and finish doing work, make sure you still sleep indefinitely (perhaps by just calling base.Run()).
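To make this concrete, here’s a minimal sketch of a web role entry point that also performs background work (the loop body is a placeholder; your own ‘work’ goes there):

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Worker-style background processing inside a web role.
        while (true)
        {
            // ... do some 'work' here, e.g. process queue messages ...

            // Never return from Run(), or the instance will be recycled.
            Thread.Sleep(10000);
        }
    }
}
```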


Extra Small Instance for VM Sizes

Previously the smallest instance size was a ‘small’, which comprised a single core at 1.6GHz, 1.75GB of memory, and about 100Mbps to the virtual network card, all for around 13 cents an hour. From there, the other VM sizes increase uniformly; a medium is 2 cores at the same clock speed, with twice as much RAM and network bandwidth, and of course twice the price; large is double a medium, and extra large is double a large.

One of the great things about Azure is that the cores are dedicated, so how can you go smaller than a single core without sharing? Well, the extra small instance is in fact a slower core at 1.0GHz with less memory. The network speed is the biggest drop, at only 5Mbps, but the cost is also quite low, coming in at around 5 cents per hour.

Host a Website Purely in Windows Azure Storage

Ok, so the website would have to be reasonably static: your HTML, CSS, and JavaScript files can all be stored in blob storage and even delivered by the Windows Azure CDN. But you could take it a step further and store a Silverlight XAP file in blob storage, which could even talk to table storage to pull lists of data. Keep in mind though that you want this to be read-only; don’t store your storage key in the Silverlight XAP file, because anyone could disassemble it and get your key (remember Silverlight runs on the client, not the server).

Note also that blob storage does not support a notion of ‘default’ files to execute in folders. This means that even if you do have a static site hosted all in blob storage, the client would need to specify the exact URL. For example, this link works, but this link does not.

Lease Blobs

The REST API for blobs allows much more than the current SDK exposes through strongly typed classes. Blob leases are one such example: you can take a lease on a blob to prevent another process from modifying it. Acquiring a lease returns a lease ID which can be used in subsequent operations. Leases expire after a certain timeout, or can otherwise be released. Steve Marx describes the code required in this post.
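Since there’s no strongly typed wrapper, the lease is acquired through the REST API. Here’s a rough sketch along the lines of Steve’s post, using the SDK’s Protocol helpers (exact signatures may differ between SDK versions):

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure.StorageClient.Protocol;

public static class BlobLeaseExtensions
{
    public static string AcquireLease(this CloudBlob blob)
    {
        var credentials = blob.ServiceClient.Credentials;
        var uri = new Uri(credentials.TransformUri(blob.Uri.ToString()));

        // Build the raw 'Lease' request and sign it with our account key.
        var request = BlobRequest.Lease(uri, 90, LeaseAction.Acquire, null);
        credentials.SignRequest(request);

        using (var response = request.GetResponse())
        {
            // The lease ID comes back in a response header.
            return response.Headers["x-ms-lease-id"];
        }
    }
}
```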

Multiple Websites in a Web Role

Previously one website meant one web role, but now you can change the service definition so that it includes another directory as part of your web role. That directory can be configured with a different binding, which means it is deployed to IIS on the underlying role instance as a second website. You can differentiate these websites either by using different port bindings (e.g. 80 for one site and 8080 for another) or by using a host header indicating that your custom domain name should direct to the second site.
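As a sketch, the service definition for a web role hosting two sites might look something like this (site names and paths are hypothetical):

```xml
<WebRole name="MyWebRole">
  <Sites>
    <Site name="Main" physicalDirectory="..\MainSite">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" />
      </Bindings>
    </Site>
    <!-- Second site shares port 80 but is selected via host header -->
    <Site name="Second" physicalDirectory="..\SecondSite">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.example.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>
```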

Role Startup Tasks

It is now possible to execute a task at start-up of your instance. Most of the work we need to do to configure our machines requires higher privileges than we would like to run our apps with, so it’s not acceptable to simply put that code in our Application_Start events. With start-up tasks we can run a batch file elevated before starting the main application in a more secure mode.

We achieve this by creating a batch file with our instructions and including it as part of the package. We then specify the batch file to run at start-up in our service definition file. It could look a little similar to this:
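For example, a service definition fragment with an elevated start-up task might look roughly like this (the batch file name is hypothetical):

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <Startup>
      <!-- Runs elevated before the role starts; 'simple' blocks
           start-up until the task completes. -->
      <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```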



Run Cool Stuff

The great thing about start-up tasks is that they not only enable the dependencies of our application, but also let us run other cool stuff, including other web servers! Here’s a (brief) list to get you thinking:

Ok maybe I lost you on that last one…

Securely Connect On-Premise Servers to Roles

Pegged as part of the Windows Azure Virtual Network offering, Windows Azure Connect is a service that allows you to connect any machine to a ‘role’, meaning all of the instances that make up that role. It uses IPSec to ensure point-to-point security between your server and the role instances running in the cloud. The role instances can effectively see your server as if it were on the local network, and address it directly. This opens up a number of hybrid cloud scenarios; here’s just a sampler:

  • Keeping that Oracle database backend located on premise
  • Using your own SMTP server without exposing it to the internet
  • Joining your Azure role machines to your private domain
    • … and using Windows Authentication off of that domain to login to your Azure apps


Manage Traffic Between Datacentres

The other part of the Windows Azure Virtual Network offering is the Windows Azure Traffic Manager. This service allows us to do one of three things:

  • Fail over from one Azure datacentre to another
  • Direct user requests for our site to the nearest datacentre
  • Round robin requests between datacentres

This service requires you to deploy your application more than once. For example, you might deploy your site to the Singapore datacentre, deploy it to the same datacentre a second time, and then deploy it to the South Central US datacentre. You could then configure the traffic manager to failover in that order, such that if your first deployment in Singapore died, it would start diverting traffic to the second deployment in Singapore, and if that also died (for example the whole datacentre was overrun in the zombie apocalypse) then your traffic would then be redirected to South Central US.

Traffic Manager doesn’t actually direct traffic like a switch; it is only responsible for name resolution. When a failover occurs, it starts handing out resolutions pointing to the second location. In the interim, some browsers will still have cached entries indicating the first location’s IP address; those will continue to fail until the TTL (time-to-live) on the DNS resolution expires and the browser makes a new request that resolves via the Traffic Manager.

CNAME Everything

You can now put your own CNAME on pretty much everything in Windows Azure. This allows you to make these sites look like your own. For example, this site demonstrates a site running from blob storage with its own domain name, and if you view source you will see that the images are also being served from a custom domain which wraps up the CDN. Similarly, you can wrap up your web roles in your own domain, and when running multiple websites from a single web role you can use a domain name as the host header to identify which website to target. Finally, the endpoint for the new Windows Azure Traffic Manager can also be aliased by your own domain.


There you have it. My slides from Remix were essentially just the 10 points, so rather than upload them somewhere to be downloaded, I thought I’d just share the main points above instead.

If you want to learn more things you might not know about Windows Azure, check out Steve Marx’s presentation from Mix 11 or check out his ‘things’ site which will also give you some insight:


Categories: Azure
Posted By: Steven Nagy
Last Edit: 06 Jun 2011 @ 07:01 AM

Comments (6)

 02 Dec 2010 @ 11:56 AM 

This weekend I’m flying over to Perth to help out the local start up talent with their Windows Azure applications. It is a weekend long event where start ups in the Microsoft BizSpark program can come along and attempt to build an application in a weekend, then pitch it to a panel of judges. The best applications (or proof of concepts) will win some cool prizes such as a new WP7 device.

Here are the details for the actual event:


Are you a start up? Even if you’re not on the BizSpark program yet, get in touch with Catherine Eibner (@ceibner) and I’m sure she’ll let you come along if you turn up with your paperwork.

Even if you don’t have an application idea for the platform, come along for the free training and technical support to learn what it takes to build highly scalable applications on the Windows Azure platform.

This is just one event in a long line of initiatives from Microsoft to support new and growing businesses and one I’m very proud to be part of. I hope to see you there!

Categories: Azure
Posted By: Steven Nagy
Last Edit: 03 Dec 2010 @ 05:56 PM

Comments (1)

 22 Oct 2010 @ 8:26 PM 

On 1st October Microsoft awarded 26 candidates in a new MVP category: Windows Azure. The Microsoft “Most Valuable Professional” award is an acknowledgement and thank-you to individuals for their contributions to the technology community. You will typically find MVPs answering questions on forums, speaking at your local user groups and conferences, blogging, tweeting, and generally trying to help with understanding and adoption of particular technologies. They give up countless hours of their personal time doing this, so they really do deserve a pat on the back.

This is the first time the Windows Azure category has been awarded which makes this particular round even more special. So congratulations to all the awardees and thanks from the community for the hard work you’ve done there.

You can find an MVP in any technology by simply accessing the MVP website:


From here you can find an MVP via the links on the left. Here’s the list of Windows Azure MVPs:


I’ve gathered some information on a few of them so you can more easily find their blogs and twitter accounts. Feel free to say hello, they really are a very friendly bunch!

Name Twitter Blog Country
Niraj Bhatt nirajrules http://nirajrules.wordpress.com/  
Andrew Wilson awilsong http://pinvoke.wordpress.com USA
Brent Stineman brentcodemonkey http://brentdacodemonkey.wordpress.com/  
Jim Zimmerman jimzim    
David Makogon dmakogon http://www.DavidMakogon.com  
Nico Ploner   http://nicoploner.blogspot.com Germany
Viktor Shatokhin way2cloud http://way2cloud.ru Ukraine
Panagiotis Kefalidis pkefal http://www.kefalidis.me Greece
Rainer Stropek rstropek http://www.timecockpit.com Austria
David Pallmann davidpallmann http://davidpallmann.blogspot.com USA
Cory Fowler syntaxc4 http://blog.syntaxc4.net USA
Sergejus Barinovas sergejusb http://sergejus.blogas.lt Lithuania
Gisela Torres 0GiS0 http://www.returngis.net Spain
Steven Nagy snagy http://snagy.name Australia
Jason Milgram   http://linxter.com/blog USA
Michael Wood mikewo http://mvwood.com  
Michael Collier michaelcollier http://www.michaelscollier.com  
Categories: Azure
Posted By: Steven Nagy
Last Edit: 11 Jan 2011 @ 08:01 PM

Comments (18)

 09 Sep 2010 @ 9:37 PM 

A couple of weeks ago I had the honor of kicking off the cloud track at TechEd 2010 Australia, where I delivered a level 300 “Lap Around the Windows Azure Platform”.

For those looking for my slides or the Service Bus demo I did, you can download them here (as zip files):

As has been my ‘thing’ of late, I took some photos of the audience prior to the start of the talk. I take no responsibility for any rude signs or funny faces.

Can you spot the 3 Readify guys?

Categories: Azure
Posted By: Steven Nagy
Last Edit: 09 Sep 2010 @ 09:39 PM

Comments (0)

 25 Jul 2010 @ 5:33 PM 

Is this the end of the road for this blog?

Windows Azure:


Sql Azure:




To summarise: I’m read-only, disabled, and inactive.

I’m now officially taking votes for the next technology I should start evangelising. Perhaps I’ll become a functional programmer, or perhaps write DSLs instead. Or maybe switch to Ruby; I’ve heard you become rich overnight if you write a cool Rails app.

However word on the street is that Windows Azure is now productised and has been handed to hosting providers to ‘trial’. Well I own 7 (working) machines at my house, surely I can build my own platform as a service?

The alternative is to start advertising on this blog and generate some revenue so that I can continue to explore the Windows Azure Platform via my wallet. Could you, the reader, handle in your face advertisements?

I’m open to suggestions.

Categories: Azure
Posted By: Steven Nagy
Last Edit: 25 Jul 2010 @ 05:33 PM

Comments (8)

 24 Jul 2010 @ 3:03 PM 

Overview Of Tables

There are three kinds of storage in Windows Azure: Tables, Blobs, and Queues. Blobs are binary large objects, and Queues are robust, enterprise-level communication queues. Tables are non-relational entity storage mechanisms. All storage is replicated three times and available via REST using the ATOM format.

Tables can store multiple entities of different shapes. That is to say, you can safely store 2 objects in a table that look completely different. For example, a Product entity might have a name and a category, whereas a User entity might have name, date of birth, and login name properties. Despite the difference between these two objects, they can both be stored in the same table in Windows Azure Table Storage.

One of the reasons we prefer to use Table storage over other database mechanisms (such as Sql Azure) is that it is optimised for performance and scalability. It achieves this through an innate partitioning mechanism based on an extra property assigned to the object, called ‘PartitionKey’. Five otherwise identical objects, each with a different partition key, can be stored on five different storage nodes.

There are a number of ways we can get and put entities into our table storage, and this article will address a few. However before we investigate some scenarios, we need to setup our table and the entities that will go into it.

Setting Up The Table

Before we view the ways we can interact with entities in a table, we must first setup the table. We are going to create a table called ‘Products’ for the purposes of this article. Here is some code that demonstrates how (this could go in Session or Application start events or anywhere you see fit).

var account = CloudStorageAccount.FromConfigurationSetting("ProductStorage");
var tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Products");

The above code assumes you have a connection string configured already called ‘ProductStorage’ which points to your Windows Azure Storage account (local development storage works just as well for testing purposes).

Setting Up The Entity

For the purposes of this article we are going to put an entity called ‘Product’ into the table. That entity can be a simple POCO (plain old CLR object); however, only the publicly accessible properties will be persisted and retrievable. Let’s define a simple product entity with a name, category, and a price. All objects stored in tables must also have a row key, partition key, and a timestamp, otherwise we will get errors when we try to persist the item. Here’s our product class:

public class Product
{
   // Required
   public DateTime Timestamp { get; set; }
   public string PartitionKey { get; set; }
   public string RowKey { get; set; }

   // Optional
   public string Name { get; set; }
   public string Category { get; set; }
   public double Price { get; set; }
}

Pretty simple eh? However we can clean up some of the code; because the Timestamp, PartitionKey, and RowKey are all required for every single table entity, we could pull those properties out into a base entity class. However we don’t have to; there already exists one in the StorageClient namespace called ‘TableServiceEntity’. It has the following definition:

[DataServiceKey(new string[] {"PartitionKey", "RowKey"})]
public abstract class TableServiceEntity
{
   protected TableServiceEntity(string partitionKey, string rowKey);
   protected TableServiceEntity();

   public DateTime Timestamp { get; set; }
   public virtual string PartitionKey { get; set; }
   public virtual string RowKey { get; set; }
}

It makes sense for us to inherit from this class instead. We’ll also follow the convention of having a partition key and row key injected in the constructor on our Product class, while also leaving a parameterless constructor for serialisation reasons:

public class Product : TableServiceEntity
{
   public Product() { }
   public Product(string partitionKey, string rowKey)
       : base(partitionKey, rowKey) { }

   public string Name { get; set; }
   public string Category { get; set; }
   public double Price { get; set; }
}

Done. We’ll use this Product entity from now on. All scenarios below will use a test Product with the following information:

var testProduct = new Product("PK", "1")
{
    Name = "Idiots Guide to Azure",
    Category = "Book",
    Price = 24.99
};

Scenario 1: Weakly Typed Table Service Context

The easiest way to get started with basic CRUD methods for our entity is by using a specialised ‘Data Service Context’. The Data Service Context is a special class belonging to the WCF Data Services client namespace (System.Data.Services.Client) and relates to a specific technology for exposing and consuming entities in a RESTful fashion. Read more about WCF Data Services here.

In a nutshell, a Data Service Context lets us consume a REST based entity (or list of entities), and that logic is given to us for free in the ‘DataServiceContext’ class, which can be found in the aforementioned System.Data.Services.Client namespace (you’ll probably need to add a reference). Consuming RESTful services is not an Azure specific thing, which is why we need to import this new namespace.

Because table storage entities act exactly like other RESTful services, we can use a data service context to interact with our entity. Tables and their entities have a few additional bits surrounding them (such as credential information like the 256-bit key needed to access table storage), so we need to be able to include this information with our data context. The Azure SDK makes this easy by providing a class derived from DataServiceContext called ‘TableServiceContext’. You’ll notice that to instantiate one of these we need to pass it a base address (our storage account) and some credentials.

If you review some of the original code above, you’ll notice we created a CloudTableClient based on connection string information in our configuration file. That same table client instance has the ability to create our TableServiceContext, using the code below:

var context = tableClient.GetDataServiceContext();

That’s it! All the explanation above just for one line of code eh? Well hopefully you understand what’s happening when we get that context. It is generating a TableServiceContext which inherits from DataServiceContext which contains all the smarts for communicating to our storage table. Simple.

Now we can call all sorts of methods to create/delete/update our products. We’ll use the ‘testProduct’ defined earlier:

context.AddObject("Products", testProduct);
context.SaveChanges();

testProduct.Price = 21.99;
context.UpdateObject(testProduct);
context.SaveChanges();

var query = context.CreateQuery<Product>("Products")
    .AddQueryOption("RowKey", "1");
var result = query.Execute().FirstOrDefault();

context.DeleteObject(testProduct);
context.SaveChanges();

The methods being called here only know about ‘object’, not ‘Product’ and are therefore not type safe. We’ll look at a more type safe example in the next scenario.

Scenario 2: Strongly Typed Table Service Context

In the previous example we saw that the Table Service Context was a generic way to get going with table entities quickly. This works well because we can put any type of entity into the table via the same ‘AddObject’ method. However sometimes in code we like to be more type safe than that and want to enforce that a particular table only accepts certain objects. Or perhaps we want unique data access classes for our different entity types so that we can put some validation in.

Either way, this is relatively easy to achieve by creating our own Data Service Context class. We still need to wrap up table storage credentials, so it’s actually easier if we inherit from TableServiceContext, as follows:

public class ProductDataContext : TableServiceContext
{
    public ProductDataContext(string baseAddress,
                              StorageCredentials credentials)
        : base(baseAddress, credentials)
    { }

    // TODO
}

The base constructor of TableServiceContext requires us to supply a base address and credentials, so we simply pass on this requirement. Our constructor doesn’t need to do anything else though.

The next step is to start adding methods to this new class that perform the CRUD operations we require. Let’s start with a simple query:

public IQueryable<Product> Products
{
   get { return CreateQuery<Product>("Products"); }
}

This will give us a ‘Products’ property on our ProductDataContext that will allow us to query against the product set using LINQ. We’ll see an example of that in a minute. For now, we’ll add in some strongly typed wrappers for the other CRUD behaviours:

public void Add(Product product)
{
   AddObject("Products", product);
}

public void Delete(Product product)
{
   DeleteObject(product);
}

public void Update(Product product)
{
   UpdateObject(product);
}

Nothing very special there, but at least we can enforce a particular type now. Let’s see how this might work in code to make calls to our new data context. As before we’ll assume the table client has already been created from configuration (see ‘Setting Up The Table’ above) and we’ll use the same test product as before:

var context = new ProductDataContext(
    tableClient.BaseUri.ToString(),
    tableClient.Credentials);

context.Add(testProduct);
context.SaveChanges();

testProduct.Price = 21.99;
context.Update(testProduct);
context.SaveChanges();

var result = context.Products
    .Where(x => x.RowKey == "1")
    .FirstOrDefault();

You can see the key differences from the weakly typed scenario mentioned earlier. We now use the new ProductDataContext, however we can’t automatically create it like we can with the generic table context, so we need to instantiate it ourselves, passing the base URI and credentials from the table client. We also use our more explicitly typed methods for our CRUD operations, however you might notice there is a big change in the way we query data. The ‘Products’ property returns IQueryable<Product> which means we can use LINQ to query the table store. Careful though, not all operations are supported by the LINQ provider. For example this will fail:

var result = context.Products
   .FirstOrDefault(x => x.RowKey == "1");

.. because FirstOrDefault is not supported with predicates. However this new query API is much nicer and allows us to do a lot more than we could when the base entity type was unknown by the data context.

Scenario 3: Using The Repository and Specification Patterns

Before reading on you might want to familiarise yourself with the concepts of these patterns. To prevent blog duplication, please refer to this article that someone smarter than me wrote:

Implementing Repository and Specification patterns using Linq.

The goal is to create a repository class that can take a generic type parameter which is an entity we want to work with. Such a repository class will be reusable for all types of entities but still be strongly typed. We also want to have it abstracted via an interface so that we are never concerned with the concrete implementation. For more information on why this is good practice, please refer to the SOLID principles.

We also want to use the specification pattern to provide filter/search information to our repository. We want to leverage the goodness of LINQ but also explicitly define those filters as specifications so that they are easily identifiable.

I usually find it easiest to start with the interface and worry about the implementation later. Let’s define an interface for a repository that will take any kind of table entity:

public interface IRepository<T> where T : TableServiceEntity
{
   void Add(T item);
   void Delete(T item);
   void Update(T item);
   IEnumerable<T> Find(params Specification<T>[] specifications);
   void SubmitChanges();
}

Seems simple enough, however you might note that our find process is less flexible than in scenario 2 where we could just use LINQ directly against our data service. We want to provide the flexibility of LINQ yet still provide explicitness and reusability of those very same queries. We could add a bunch of methods for each query we want to do. For example, to retrieve a single product, we could create an extra method called ‘GetSingle(string rowkey)’. However that only applies to products, and may not apply to other entity types. Likewise, if we want to get all Products over $15, we can’t do that in our repository because it makes no sense to get all User entities that are over $15.

That’s where the specification pattern comes in. A specification is a piece of information about how to refine our search. Think of it as a search object, except it contains a LINQ expression. We’ll see an example soon, but let’s just define our specification class and adjust our Find method on our IRepository<T> interface first:

IEnumerable<T> Find(params Specification<T>[] specifications);

public abstract class Specification<T>
{
    public abstract Expression<Func<T, bool>> Predicate { get; }
}

Our Find method has been adjusted to Find entities that satisfy the specifications provided. And a specification is just a wrapper around a predicate. Oh, and a predicate is just a fancy word for a condition. For example, consider this code:

if (a < 3) a++;

The part that says “a < 3” is the predicate. We can effectively change that same code to the following:

Func<int, bool> predicate = someInt => someInt < 3;
if (predicate(a)) a++;

It might seem like code bloat in such a simple example, but the ability to reuse a ‘condition’ to check in many places will be a life saver when your systems start to grow. In our case we care about predicates because LINQ is full of them. For example, the “Where” statement takes a predicate in the form of Func<T, bool> (where T is the generic type on your IEnumerable). In fact, this is the exact reason we are also interested in predicates in our specification. Each specification represents some kind of filter. For example:

Products.Where(x => x.RowKey == "1")

The part that says x.RowKey == "1" is a predicate, and can be made reusable as a specification. You’ll see it in action in the final code below, but for now we’ll move on to our Repository implementation. Just keep in mind that we will be reusing those ‘conditions’ and storing them in their own classes.

We’ll focus first on the definition of the repository class and its constructor:

public class TableRepository<T> : IRepository<T>
       where T : TableServiceEntity
{
   private readonly string _tableName;
   private readonly TableServiceContext _dataContext;

   public TableRepository(string tableName,
                          TableServiceContext dataContext)
   {
      _tableName = tableName;
      _dataContext = dataContext;
   }

   // TODO CRUD methods
}

Our table repository implements our interface and most importantly takes a TableServiceContext as one of its constructor parameters. And to complete the interface contract we must also ensure that all generic types used in this repository inherit from TableServiceEntity. Next we’ll add in the Add/Update/Delete methods since they are the easiest:

public void Add(T item)
{
   _dataContext.AddObject(_tableName, item);
}

public void Delete(T item)
{
   _dataContext.DeleteObject(item);
}

public void Update(T item)
{
   _dataContext.UpdateObject(item);
}

Simple enough, since we have the generic table service context at our disposal. Likewise we can add in the SubmitChanges() method:

public void SubmitChanges()
{
   _dataContext.SaveChanges();
}

We could just call SaveChanges whenever we add or delete an item, but this makes it more difficult to do batch operations. For example, we might want to add 5 products and then submit them all as one request to the table storage API. This method lets us submit whenever we like, which is in keeping with the approach used when creating your own TableServiceContext or using the default one.
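For instance, a sketch of batching several inserts into one submission (newProducts is a hypothetical collection of Product objects):

```csharp
// Queue up several inserts; nothing hits the wire yet.
foreach (var product in newProducts)
    productRepository.Add(product);

// Persist all pending changes in a single submission.
productRepository.SubmitChanges();
```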

Finally, we need to define our Find method which takes zero or more specifications:

public IEnumerable<T> Find(params Specification<T>[] specifications)
{
   IQueryable<T> query = _dataContext.CreateQuery<T>(_tableName);

   foreach (var spec in specifications)
      query = query.Where(spec.Predicate);

   return query.ToArray();
}

Every specification must have a predicate (refer to the initial definition and you will see the property is defined as ‘abstract’, which means it must be overridden). And a predicate here is an Expression<Func<T, bool>>, where T is the same type as our repository. Therefore we can simply chain all the predicates together by calling the .Where() extension method on the query over and over for each specification. At the end of the day the code is really quite small.

And that’s all the framework-like code for setting up the Repository and Specification patterns against table storage. To show you how it works we first need a specification that allows us to get a product back based on its row key. Here’s an example:

public class ByRowKeySpecification : Specification<Product>
{
   private readonly string _rowkey;

   public ByRowKeySpecification(string rowkey)
   {
      _rowkey = rowkey;
   }

   public override Expression<Func<Product, bool>> Predicate
   {
      get { return p => p.RowKey == _rowkey; }
   }
}

In this specification, we take a row key in the constructor, and use that in the predicate that gets returned. The predicate simply says: “For any product, only return those products that have this row key”. We can use this specification along with our repository to perform CRUD operations as follows:

var context = tableClient.GetDataServiceContext();

IRepository<Product> productRepository =
   new TableRepository<Product>("Products", context);

productRepository.Add(testProduct);
productRepository.SubmitChanges();

testProduct.Price = 21.99;
productRepository.Update(testProduct);
productRepository.SubmitChanges();

var byRowkey = new ByRowKeySpecification("1");
var results = productRepository.Find(byRowkey);
var result = results.FirstOrDefault();

Tada! We now have a strongly typed repository that will work on any entity type you want to use. And the great thing about repositories is that because we have an IRepository abstraction we can implement an ‘in memory’ version of the repository which is very useful for unit testing.


As we progressed through the three options the amount of code grew, but I think we also got closer to true object oriented programming by the end. Personally I always like to use repositories and specifications because it means we can write our code in a way that makes the persistence mechanism irrelevant. We could easily decide to move products into our SQL Azure database and swap in a SqlRepository<T> for the TableRepository<T>.
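For example, the swap becomes a one-line change where the repository is constructed (SqlRepository<T> and its constructor arguments are hypothetical here):

// Consumers depend only on the abstraction...
IRepository<Product> products =
   new TableRepository<Product>("Products", context);

// ...so moving to SQL Azure only changes the construction:
// IRepository<Product> products = new SqlRepository<Product>(connectionString);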

Hopefully you’ll find the concept useful and aim to start with scenario 3 in all cases. To help you get started, I’ve assembled all 3 options into a reusable library for you, downloadable from here:

In each of the scenario folders you’ll find a single starter class that inherits from the TableStorageTest abstract class; you can look at that class to work out how the particular scenario works.

In the near future I will be looking to create a number of these basic classes as a reusable library to help Windows Azure developers get up and running faster with their applications. But in the meantime, happy coding.

Categories: Azure
Posted By: Steven Nagy
Last Edit: 24 Jul 2010 @ 03 40 PM


 21 Jul 2010 @ 8:27 PM 

Last night I delivered another Windows Azure Platform talk, this time at my local user group. It was one of the best sessions I've done so far; I had a good vibe going with the audience. And I appreciate that so many of you stuck around for the second talk; we went for a good two hours at least! Plus another two hours at the pub afterwards…

Anyway, here’s the slide deck.

Categories: Azure
Posted By: Steven Nagy
Last Edit: 21 Jul 2010 @ 08 27 PM


Earlier this year Eric Nelson from Microsoft put out the call to Azure authors to build a community eBook about the Windows Azure Platform. Today the book was finally released to the web, and I have two articles included: Auto-scaling Azure, and Building Highly Scalable Applications in the Cloud.

I won’t bang on about it here, just check it out for yourself:

Categories: Azure
Posted By: Steven Nagy
Last Edit: 23 Jun 2010 @ 10 27 PM


 13 Jun 2010 @ 11:02 AM 

Early last week I had the privilege of presenting at Remix10, a two-day web conference held in Melbourne, Australia. I presented in front of a packed crowd, and the topic proved so popular that I delivered it again the next day.



I really enjoyed the “love fest” and while I didn’t get to see much of the other presentations, the keynote was awesome, demonstrating some great integration points with Microsoft Surface, Slate, and Windows Phone 7.

My presentation topic was “Architecting for the Cloud” with a focus on how to build highly scalable applications, leveraging aspects of the Windows Azure Platform.

The presentation began with a quick overview of the platform, followed by identifying some key aspects of highly scalable applications, such as minimising state, caching mechanisms and messaging patterns.

As with my talk at the Windows Azure launch in the Philippines, I decided to take some photos of the crowd at the start. I got to speak to a lot of these individuals during the mixer drinks that evening and it sounds like the Azure platform is starting to gain momentum in Australia.



I really enjoyed the event and hope to be back next year!

Remix10 Slide Deck – Architecting For The Cloud – 1.75Mb

Categories: Azure
Posted By: Steven Nagy
Last Edit: 13 Jun 2010 @ 11 02 AM


 03 Jun 2010 @ 8:49 PM 

On the 8th June 2010 Brisbane will have its first ever CloudCamp event. I’m very excited about this. I’ve had my head in the Azure space for so long I’ve somewhat neglected what everyone else is doing.

Register for CloudCamp Brisbane here.


Here's the blurb direct from the website:

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

And the tentative schedule:

  • Registration & Networking
  • Welcome and Thank yous
  • Lightning Talks (5 minutes each)
  • Unpanel
  • Begin Unconference (organize the unconference)
  • Unconference Session 1
  • Unconference Session 2
  • Wrap-up
  • Networking in conjunction with Zendesk meetup


The event will be held at Griffith University's Nathan Campus. Three rooms have been allocated to us (for free, thanks to the School of Computing and Information Technology).

Click here for a map.

As you can see from the map, the University is right in the middle of Toohey Forest.


Getting to and from the University is relatively easy. The two main options are driving and buses (there is no train station close by).

Parking will cost. Campus security is very liberal with handing out fines so be prepared to pay for the full time period.

Buses are available from the city or from Garden City shopping centre.


Start prepping your talk ideas and come along. I look forward to meeting anyone else with their head in the clouds. Tell your friends!

Categories: Cloud
Posted By: Steven Nagy
Last Edit: 03 Jun 2010 @ 08 49 PM


