Service Bus Notification Hubs are a brand new intrinsic feature of Windows Azure Service Bus and are different from other push notification services in four key areas:

  • Complete client registration management. Your backend application (if you even have one) does not need to worry at all about device-ids or channels or other particulars of push notifications and doesn't need to cooperate in management. It doesn't even have to be a web app that's publicly accessible.  
  • Platform independence. Service Bus Notification Hubs allow cross-platform push notifications so that iOS Alerts and Windows Live Tiles can be targeted with a single event message. 
  • Broadcast and tag-based Multicast - Service Bus Notification Hubs are optimized for automatic notification broadcast to many thousands of devices with low latency. One message in, thousands of notifications out.
  • Mass customization - Notification Hub notification templates allow for customization of notification delivery for each individual registration, allowing each instance of a client App to choose how it wants to receive events.

In this preview, Notification Hubs are able to push notifications to Windows Store apps and iOS apps from .NET back-ends. Support for Android and Windows Phone, along with additional back-end technologies (including Windows Azure Mobile Services) will be added soon.

After the basic intro, I'm showing how to create and provision a Windows 8 application from scratch, how to hook it up to a new Notification Hub, and send it a notification "Toast" using the portals and Visual Studio 2012. (The equivalent iOS walkthrough will follow later this week)

For those of you with a "TL;DW" attention span (too long; didn't watch), here's the whole extent of the code added to the stock Windows Store Grid template to enable Service Bus notifications, including re-registering existing registrations at app startup: five lines without cosmetic wrapping, plus some massaging of XML for the template:

public App()
{
    // A connection string with Listen rights is all the client app needs;
    // the shared secret placeholder comes from the portal.
    var cn = ConnectionString.CreateUsingSharedAccessSecretWithListenAccess(
            "sb://clemensv1.servicebus.windows.net",
            "{{secret-key}}");
    this.notificationHub = new NotificationHub("myhub", cn);

    ...
}

async Task InitNotificationsAsync()
{
    // Re-register any existing registrations for this installation at startup.
    await notificationHub.RefreshRegistrationsAsync();

    // Create the "myToast" template registration only if it doesn't exist yet.
    if (!await notificationHub.RegistrationExistsForApplicationAsync("myToast"))
    {
        await notificationHub.CreateTemplateRegistrationForApplicationAsync(
            CreateTemplate(), "myToast");
    }
}
        
XmlDocument CreateTemplate()
{
    // Start from the stock ToastText01 template and put the $(msg)
    // placeholder into its single text element; the backend fills it in.
    var t = ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastText01);
    var n = t.SelectSingleNode("//text[@id='1']") as XmlElement;
    if (n != null)
    {
        n.InnerText = "$(msg)";
    }
    return t;
}

The event-source code is similarly terse:

// The backend uses a full-access secret, in contrast to the listen-only secret in the client app.
var cn = ServiceBusConnectionStringBuilder.CreateUsingSharedAccessSecretWithFullAccess(
    "clemensv1", "{{secret-key}}");

var hubClient = NotificationHubClient.
    CreateClientFromConnectionString(cn, "myhub");

// The dictionary supplies the values for the template placeholders.
hubClient.SendTemplateNotification(new Dictionary<string, string>{
    { "msg", TextBox1.Text }});

3 lines. Three lines. No management of device ids. No public endpoint for the phone to talk to. Service Bus does all that. It really is worth playing with.


The basic idea of the Enterprise Service Bus paints a wonderful picture of a harmonious coexistence, integration, and collaboration of software services. Services for a particular general cause are built or procured once and reused across the Enterprise by way of publishing them and their capabilities in a corporate services repository from where they can be discovered. The repository holds contracts and policies that allow dynamically generating functional adapters to integrate with services. Collaboration and communication is virtualized through an intermediary layer that knows how to translate messages from and to any other service hooked into the ESB, like the Babel fish in The Hitchhiker's Guide to the Galaxy. The ESB is a bus, meaning it aspires to be a smart, virtualizing, mediating, orchestrating messaging substrate permeating the Enterprise, providing uniform and mediated access anytime and anywhere throughout today's global Enterprise. That idea is so beautiful, it rivals My Little Pony. Sadly, it's also about as realistic. We tried regardless.

As with many utopian ideas, before we can get to the pure ideal of an ESB, there's a less ideal and usually fairly ugly phase involved where non-conformant services are made conformant. Until they are turned into WS-* services, every CICS transaction and SAP BAPI gets fronted with a translator, and as that skinning renovation takes place, there's also some optimization around message flow, meaning messages get batched or de-batched, enriched or reduced. That phase also taught everyone involved the value and lure of central control. SOA Governance is an interesting idea to get customers drunk on. That ultimately led to cheating on the 'B'. When you look at products proudly carrying the moniker 'Enterprise Service Bus', you will see hubs. In practice, the B in ESB is mostly just a lie. Some vendors sell ESB servers, some even sell ESB appliances. If you need to walk to a central place to talk to anyone, it's a hub. Not a bus.

Yet, the bus does exist. The IP network is the bus. It turns out to suit us well on the Internet. Mind that I’m explicitly talking about “IP network” and not “Web” as I do believe that there are very many useful protocols beyond HTTP. The Web is obviously the banner example for a successful implementation of services on the IP network that does just fine without any form of centralized services other than the highly redundant domain name system.

Centralized control over services does not scale in any dimension. Intentionally creating a bottleneck through a centrally controlling committee of ESB machines, however far scaled out, is not a winning proposition in a time where every potential or actual customer carries a powerful computer in their pocket that lets them initiate ad-hoc transactions at any time and from anywhere, and where we see vehicles, machines, and devices increasingly spew out telemetry and accept remote control commands. Central control and policy-driven governance over all services in an Enterprise also kills all agility and reduces the ability to adapt services to changing needs, because governance invariably implies process and certification. Five-year plan, anyone?

If the ESB architecture ideal weren't a failure already, the competitive pressure to adopt direct digital interaction with customers via the Web and apps, and therefore to scale not to the size of the enterprise but to the size of the enterprise's customer base, will seal its collapse.

Service Orientation

While the ESB as a concept permeating the entire Enterprise is dead, the related notion of Service Orientation is thriving even though the four tenets of SOA are rarely mentioned anymore. HTTP-based services on the Web embrace explicit message passing. They mostly do so over the baseline application contract and negotiated payloads that the HTTP specification provides for. In the case of SOAP or XML-RPC, they are using abstractions on top that have their own application protocol semantics. Services are clearly understood as units of management, deployment, and versioning and that understanding is codified in most platform-as-a-service offerings.

That said, while explicit boundaries, autonomy, and contract sharing have been clearly established, the notion of policy-driven compatibility – arguably a political addition to the list to motivate WS-Policy at the time – has generally been replaced by something even more powerful: Code. JavaScript code, to be more precise. Instead of trying to tell a generic client how to adapt to service settings by giving it a complex document explaining which switches to turn, clients now get code that turns the switches outright. The successful alternative is to simply provide no choice. There's one way to gain access authorization for a service, period. The "policy" is in the docs.

The REST architecture model is service oriented – and I am not meaning to imply that it is so because of any particular influence. The foundational principles were becoming common sense around the time when these terms were coined and as the notion of broadly interoperable programmable services started to gain traction in the late 1990s – the subsequent grand dissent that arose was around whether pure HTTP was sufficient to build these services, or whether the ambitious multi-protocol abstraction for WS-* would be needed. I think it’s fairly easy to declare the winner there.

Federated Autonomous Services

Windows Azure, to name a system that surely fits the kind of solution complexity that ESBs were aimed at, is a very large distributed system with a significant number of independent multi-tenant services and deployments that are spread across many data centers. In addition to the publicly exposed capabilities, there are quite a number of "invisible" services for provisioning, usage tracking and analysis, billing, diagnostics, deployment, and other purposes. Some components of these internal services integrate with external providers. Windows Azure doesn't use an ESB. Windows Azure is a federation of autonomous services.

The basic shape of each of these services is effectively identical, and that's not owing, at least not to my knowledge, to any central architectural directive, even though the services that shipped after the initial wave certainly took a good look at the patterns that emerged. Practically all services have a gateway, whose purpose is to handle, dispatch, and sometimes preprocess incoming network requests or sessions, and a backend that ultimately fulfills the requests. The services interact through public IP space, meaning that if Service Bus wants to talk to its SQL Database backend it uses a public IP address and not some private IP. The Internet is the bus. The backend and its structure are entirely a private implementation matter. It could be a single role or many roles.

Any gateway's job is to provide network request management, which includes establishing and maintaining sessions, session security and authorization, API versioning where multiple variants of the same API are often provided in parallel, usage tracking, defense mechanisms, and diagnostics for its areas of responsibility. This functionality is specific and inherent to the service. And it's not all HTTP. SQL Database has a gateway that speaks the Tabular Data Stream (TDS) protocol over TCP, for instance, and Service Bus has a gateway that speaks AMQP and the binary, proprietary Relay and Messaging protocols.

Governance and diagnostics don't work by putting a man in the middle and watching the traffic go by, which is akin to trying to tell whether a business is healthy by counting the trucks going to its warehouse. Instead, we integrate the data feeds that come out of the respective services, generated with full knowledge of their internal state, and concentrate these data streams, like the billing stream, in yet other services that are also autonomous and have their own gateways. All these services interact and integrate even though they're built by a composite team far exceeding the scale of most enterprises' largest projects, and while teams run on separate schedules with deployments into the overall system happening multiple times daily. It works because each service owns its gateway, is explicit about its versioning strategy, and has a very clear mandate to honor published contracts, which includes explicit regression testing. It would be unfathomable to maintain a system of this scale through a centrally governed switchboard service like an ESB.

Well, where does that leave "ESB technologies" like BizTalk Server? The answer is simply that they're being used for what they're commonly used for in practice: as a gateway technology. If a service in such a federation has to adhere to a particular industry standard for commerce, for instance if it has to understand EDIFACT or X12 messages sent to it, the gateway would employ an appropriate and proven implementation and thus likely rely on BizTalk if implemented on the Microsoft stack. If a service has to speak to an external service for which it has to build EDI exchanges, it would likely be very cost-effective to also use BizTalk as the appropriate tool for that outbound integration. Likewise, if data has to be extracted from backend-internal message traffic for tracking purposes and BizTalk's BAM capabilities are a fit, it might be a reasonable component to use for that. If there's a long-running process around exchanging electronic documents, BizTalk Orchestration might be appropriate; if there's a document exchange involving humans, then SharePoint and/or Workflow would be good candidates from the toolset.

For most services, the key gateway technology of choice is HTTP, using frameworks like ASP.NET Web API, probably paired with IIS features like Application Request Routing, and the gateway is largely stateless.
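
As a rough sketch of what such a stateless gateway endpoint can look like (the controller, queue name, and connection-string placeholder below are made up for illustration, not taken from any actual Azure service), a Web API controller might do little more than check authorization and hand the payload to the backend through a queue:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.ServiceBus.Messaging;

// Hypothetical gateway controller: authorize, accept, and hand off;
// no per-request state lives in the gateway itself.
public class TelemetryController : ApiController
{
    static readonly QueueClient Backend =
        QueueClient.CreateFromConnectionString("{{connection-string}}", "telemetry");

    public async Task<HttpResponseMessage> Post()
    {
        if (!Request.Headers.Contains("Authorization"))
        {
            return Request.CreateResponse(HttpStatusCode.Unauthorized);
        }

        var payload = await Request.Content.ReadAsStringAsync();
        Backend.Send(new BrokeredMessage(payload));
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }
}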

In this context, Windows Azure Service Bus is, in fact, a technology choice to implement application gateways. A Service Bus namespace thus forms a message bus for “a service” and not for “all services”. It’s as scoped to a service or a set of related services as an IIS site is usually scoped to one or a few related services. The Relay is a way to place a gateway into the cloud for services where the backend resides outside of the cloud environment and it also allows for multiple systems, e.g. branch systems, to be federated into a single gateway to be addressed from other systems and thus form a gateway of gateways. The messaging capabilities with Queues and Pub/Sub Topics provide a way for inbound traffic to be authorized and queued up on behalf of the service, with Service Bus acting as the mediator and first line of defense and where a service will never get a message from the outside world unless it explicitly fetches it from Service Bus. The service can’t be overstressed and it can’t be accessed except through sending it a message.
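
To make that pull model concrete, here's a minimal sketch of a backend worker fetching from its gateway queue with the Service Bus client library; the queue name and the connection-string placeholder are mine, not from any particular service:

using System;
using Microsoft.ServiceBus.Messaging;

class BackendWorker
{
    static void Main()
    {
        // The backend reaches out to Service Bus; nothing pushes into it directly.
        var queue = QueueClient.CreateFromConnectionString("{{connection-string}}", "orders");

        while (true)
        {
            // Long-poll for up to 30 seconds; null means nothing arrived.
            BrokeredMessage message = queue.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;

            try
            {
                Console.WriteLine("Processing {0}", message.MessageId);
                message.Complete();   // done; remove it from the queue
            }
            catch (Exception)
            {
                message.Abandon();    // let it become available for another attempt
            }
        }
    }
}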

The next logical step on that journey is to provide federation capabilities with reliable handoff of messages between services, meaning that you can safely enqueue a message within a service and then have Service Bus replicate that message (or one copy in the case of pub/sub) over to another service's gateway – across namespaces and across datacenters or your own sites, and using the open AMQP protocol. You can do that today with a few lines of code, but this will become inherent to the system later this year.
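
Those "few lines of code" amount to a simple forwarding pump. Here's a sketch with made-up queue names and connection-string placeholders; note that this naive version gives you at-least-once handoff, since the message gets forwarded again if completing the source message fails after the send:

// Drain the local service's outbox queue into another service's
// gateway queue in a different namespace (names are hypothetical).
var source = QueueClient.CreateFromConnectionString("{{namespace-A-connection-string}}", "outbox");
var target = QueueClient.CreateFromConnectionString("{{namespace-B-connection-string}}", "transfer");

while (true)
{
    var message = source.Receive(TimeSpan.FromSeconds(30));
    if (message == null) continue;

    // Clone before forwarding; a received message can't be re-sent as-is.
    target.Send(message.Clone());
    message.Complete();
}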

Categories: Architecture | SOA | Azure

Head over to my Subscribe! blog on Channel 9 for the latest episode on Transactions.


A suggestion was made on mygreatwindowsazureidea.com for Windows Azure Service Bus to support distributed transactions. The item isn't very popular on the site, with 7 votes, but I know it's a topic near and dear to the heart of many folks writing business solutions. We in the Service Bus team own MSMQ, the Workflow team next door owns DTC, and we're getting enough requests now that we'll start working on better guidance around transactions in the coming months, some of which will come in the form of clips on my Subscribe! blog on Channel 9.

What’s not likely going to happen is that we will provide a magic “it just works” solution that brings DTC and the 2PC model to the cloud. Why? Because 2PC isn’t doing well in that world. Here is my reply to the post on mygreatwindowsazureidea.com for better linking:

Hi, I'm on the Service Bus team and I very much appreciate the intent of this suggestion.

I wish we could enable that easily, but unfortunately this is a hard problem.

The distributed transaction model with the common 2-phase-commit protocol with a central coordinator is very suitable as a convenient error management mechanism for physical single-node systems and for small clusters of a few physical nodes that are close together. As you get very serious about scale, virtualization and high availability, the very foundation of that model starts shaking.

For 2PC to work, the coordinator's availability, both in terms of compute and in terms of network availability, must be close to perfect. If you lose the coordinator or you lose sight of the coordinator and you have resources in 'prepared' state, there is no reasonable mechanism for those resources to break their promise and back out in 2PC. On premises, the solution to that is to cluster DTC with the Windows Clustering services on a shared, redundant disk array and have redundant networking to all resources. Unless you do exactly that, you're not likely building a solution that survives a DTC hardware component failure without running into major trouble on the software side. Once you step into virtualized environments, a lot of the underlying assumptions of that cluster setup start to break down, as the virtualization environment and placement strategies introduce new risk into the relationship between the clustered resources.

Likewise, the resource managers themselves are moved further away. You no longer have a tightly controlled system where everything runs in a rack and is on the same network segment with negligible latency. Things run scattered over many racks. The bias in virtualization environments and the cloud is toward system availability (i.e. the majority of nodes in a system is available) rather than single-node reliability (i.e. individual nodes don't go down).

The 2PC model largely assumes that individual transactions go wrong due to intermittent issues and not due to losing random nodes completely and without notice. It obviously does provide a lifeline for when resource managers run into serious system issues as transactions are in progress, but it’s generally not very suitable for a world where workloads span many nodes and stuff goes up and down and moves all the time for the sake of overall system availability when that also includes the coordinator.

The result of using distributed transactions spanning multiple nodes in such an environment is, at worst and as explained by the CAP theorem, a complete gridlock as locks get placed and held and either take very long to resolve or end up leaving transactions in doubt requiring intervention.

Ultimately, MSDTC is a single-node/cluster and local-network technology, which also manifests in its security model that is fairly difficult to adapt to a multitenant cloud system.

Mind that I am by no means looking to cast any doubts over anyone's use of MSDTC within its design scope. MSDTC is proven and rock-solid reliable within those limits. When all resources are on one node or are close together, belong to a single tenant/app, and are within a trust domain, it is and remains a great choice because of the simplicity it provides around failure management, even for work spanning multiple resources inside a Windows Azure VM.

Due to these considerations, it's hard for us to support classic distributed transactions with DTC enlistment because people would justifiably expect them to "just work" - and it's hard to see how they would. Beyond that, I have serious concerns around system availability and security if locks on Service Bus' internal resources could be impacted by third parties by ways of having them enlisted in a transaction even if we were owning the coordinator.

That all said, we do have DTC support for MSMQ, which is also owned by the Service Bus team. The way to get DTC support for Service Bus is to proxy it with a local MSMQ queue and then do a reliable handoff to Service Bus with a pump. We already have a sample for that, and we will build that out further:

http://code.msdn.microsoft.com/windowsazure/Service-Bus-Durable-Sender-0763230d
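
In rough outline (this sketch is mine, not the sample's actual code; queue names and the connection-string placeholder are made up), the pattern is: the application does only a local, transactional MSMQ send, and a separate pump drains that queue into Service Bus:

using System;
using System.Messaging;
using Microsoft.ServiceBus.Messaging;

class DurableSenderSketch
{
    const string LocalPath = @".\private$\servicebus-outbox";

    // The application enlists only the local MSMQ send in its transaction,
    // e.g. alongside a SQL Server write under DTC on the same box.
    public static void EnqueueLocally(string body)
    {
        using (var local = new MessageQueue(LocalPath))
        {
            local.Send(new Message(body), MessageQueueTransactionType.Automatic);
        }
    }

    // A separate pump reliably hands the queued messages off to Service Bus.
    public static void Pump()
    {
        var cloudQueue = QueueClient.CreateFromConnectionString("{{connection-string}}", "orders");
        using (var local = new MessageQueue(LocalPath))
        {
            local.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                using (var tx = new MessageQueueTransaction())
                {
                    tx.Begin();
                    var msg = local.Receive(tx);
                    cloudQueue.Send(new BrokeredMessage((string)msg.Body));
                    tx.Commit(); // remove locally only after the cloud send succeeded
                }
            }
        }
    }
}

This gives you at-least-once delivery into Service Bus; if the pump dies between the send and the commit, the message can be forwarded twice, so the receiver should be idempotent or the target queue should use duplicate detection.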

The considerations for Service Bus for Windows Server are similar.
