At the bottom of this post you’ll find the DinnerNow version that I’ve been using for my PDC09 talk. The video of that talk is now available at http://microsoftpdc.com/Sessions/SVC18 and I recommend that you listen to the talk for context.

The DinnerNow drop I’m sharing here is a customized version of the DinnerNow 3.1 release that’s up on CodePlex. If I were you, I’d install the original version, unpack my zip file alongside it, and then use some kind of diff tool (the Windows SDK’s WinDiff tool is a start) to look at the differences between the versions. That will give you a raw overview of what I had to do. You’ll find that I had to add and move a few things, but that the app didn’t change in any radical way.

Remember that looking at the code is more important than making it run. There’s one particular challenge you’d have right now with the Windows Azure CTP and that’s getting the two (!) Windows Azure compute tokens needed for separating out the web and the service tier as I’ve done here. It’s not difficult to consolidate the Web and the Web Service tier into a single role, but since I had to do the migration within a short period of time, I chose to split them up.

FWIW, I time-boxed the migration to 3 work days – which included learning about what our buddies over in SQL Azure had done in the past months — and that turned out to be a comfortable fit in terms of time.

Another consequence of the time-boxing is that you’ll find I’ve disabled security on most endpoints, including the Access Control integration with Service Bus, by setting the relayClientAuthenticationType attribute on the respective binding elements to None.
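
For reference, this is roughly what that looks like in a service’s config file – a minimal sketch with a made-up binding name, not the actual DinnerNow configuration:

  <bindings>
    <netTcpRelayBinding>
      <!-- Hypothetical binding name; the relevant bit is the security element. -->
      <binding name="unsecuredRelayBinding">
        <!-- Setting this to "None" turns off the Access Control check at the relay. -->
        <security relayClientAuthenticationType="None" />
      </binding>
    </netTcpRelayBinding>
  </bindings>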

I know that’s a sin, but I didn’t want to cause too much churn in the first iteration. The original version of DinnerNow conveniently uses Windows authentication/authorization for its communication paths. While that’s ok for a LAN setup, things get more complicated for the actual WAN setup that the DinnerNow scenario calls for. That would spawn a wholly different discussion that shines the spotlight on our Access Control service and why it’s useful – even required – for that scenario. In order not to overwhelm everyone, I left that out for this round and will revisit that aspect in the next few weeks – or maybe one of our (aspiring?) MVPs or RDs will beat me to it.

I’m also going to work with the folks who wrote DinnerNow to find a way to host this modified version of DinnerNow so that the on-premise runtime bits expressly don’t live on my primary dev machine, which is where they run right now.

 

Here’s what you need to do to get it to run

I know this is rough. Writing up the long version of this is going to take some time and I prefer getting the bits to you early over me sitting here writing pages of docs. Maybe you can even help ;-)

  1. First, you’ll need to go to the Windows Azure portal and get the SDKs and tokens/accounts. The Getting Started page has all the data and links you need so I’m not going to repeat them here in much detail. You will need at least one Windows Azure compute account (apply here), one SQL Azure account (apply here), and an AppFabric account (no application needed, just log in w/ LiveID). 
  2. Download and install the regular DinnerNow 3.1 release from CodePlex. This will drop a “Configure DinnerNow 3.1” shortcut on your desktop. Run that, install all prerequisites, and make sure DinnerNow runs locally before you proceed.
  3. You will later need the databases that the setup created in your local SQLEXPRESS instance. You’ll have to make a few changes, though.
    1. First, (download, install, and) open SQL Server Management Studio, connect to your SQL Server Express instance, and switch to “SQL Server and Windows Authentication mode” in the Server Properties under Security. Then you’ll need to go to the Security settings and either create a new account and grant it all rights on the aspnetdb database, or just enable the ‘sa’ account and set its password.
    2. Then you need to find the “SQL Server Configuration Manager” and enable TCP for your SQLEXPRESS instance. The default port will be 1433. If you have a full SQL Server instance on your dev machine and it’s configured for TCP, the easiest option is to suspend it for the moment and let the SQLEXPRESS instance squat the port. (There’s a small connectivity check right after this list that you can use to verify the setup.)
  4. Unpack the ZIP file appended below into a directory on your machine. At this point it should be ok to overwrite the existing DinnerNow directory, but I’d keep things side-by-side for reference. If you keep them side-by-side, grab the ‘./solution/DinnerNow – Web/DinnerNow.WebUX/images/’ directory from your local installation and copy it into the location where you unzipped the file here; I left out the images due to their size. And just as with the normal DinnerNow installation, you’ll find a solution file named “DinnerNow - Main.sln” in the unpacked directory – open that in Visual Studio 2008 (not 2010!) because you’ll have to make some changes and edits.
  5. If you are lucky enough to have two Windows Azure compute accounts, you can skip this step. Otherwise, you will have to restructure the application a bit: 
    1. In the “DinnerNow – WA” solution branch, where the Windows Azure deployment projects reside, you’ll have to consolidate the DinnerNow.WindowsAzure and DinnerNow.WindowsAzureAppSrv projects into one by replicating the DinnerNow.DBBridge reference into the DinnerNow.WindowsAzure project and abandoning/deleting the rest.
    2. In the “DinnerNow – Web” solution branch you will have to modify the DinnerNow.WebUX project by merging the DinnerNow.ServiceHost project from the “DinnerNow -ServicePortfolio2” branch into it, including merging the config files. In the original DinnerNow the hosting default is that the ServiceHost  project lives in the ./services subdirectory of the WebUX app. You can also do it that way, but you’ll have to change the respective client URIs to point to the right path.
  6. In the ./database directory is a file called SQLAzureImport.sql. That’s the exported and customized script for the DinnerNow restaurants and menus database. Create a new database (1GB is enough) and load the DB with this script. You can do this with the command line or with SQL Management Studio. The SQL Azure docs will tell you how.
  7. Now you’ll need to do a range of search/replace steps across the whole project. These are mostly in *.config files - a few places are in the code, which I count as bugs, but those are faithfully carried over from the original:
    1. Find all occurrences of sqlazure-instance and replace them with your unqualified SQL Azure server name (might look like this: tn0a1b2c3d)
    2. Find all occurrences of sqlazure-dbname and replace them with your SQL Azure database name
    3. Find all occurrences of sqlazure-acct and replace them with your SQL Azure administrator username
    4. Find all occurrences of sqlazure-password and replace them with your SQL Azure administrator password
    5. Find all occurrences of appfabricservicebus-ns and replace them with your unqualified AppFabric namespace name
    6. Find all occurrences of appfabricservicebus-key and replace them with your AppFabric Service Bus issuer key
    7. Find all occurrences of windowsazuresvcrole-acct and replace them with the name of your Windows Azure compute account. If you have just one, use that (given you’ve done the rework in step 5); if you have two, use the account name where you will host the service tier.
    8. Find all occurrences of sqlserver-password and replace them with your local SQL Server Express instance’s ‘sa’ account password.
  8. Do a full batch Rebuild of the whole project
  9. Go to the “DinnerNow – WA” solution and publish the project(s) to your Windows Azure compute account(s). If you had to consolidate them you’ll have one package to deploy; if you left things as they are you’ll have two packages to deploy. You can also run these packages in the local DevFabric to test things out.
  10. The executables you need to run are going to be dropped into the .\bin directory by the build. You need to run all 6 apps – but you could run them on 6 different machines – the two workflow hosts each assume the local presence of the DinnerNowWF database:
    1. CloudTraceRecorder.exe – this is the simple event listener app. You can run this right away to observe the apps starting up inside of Azure as they write events to the event listener. You can and should run this as you deploy. You can run any number of instances of CloudTraceRecorder anywhere.
    2. PortBridge.exe – this is the on-premise bridge-head for bridging to your local SQL Server Express instance so that the cloud application can get at its membership database that you host for it on your machine. After the search/replace steps you will notice that you have modified connection strings that point to a SQL Server role peeking out of your *AppSrv role. The secret ingredient is in the DinnerNow.DBBridge role that’s listening for TCP connections on behalf of your on-premise SQL Server and that connects them down to your local server with the logic in Microsoft.Samples.ServiceBus.Connections. This is the same code that’s in PortBridge.
    3. DinnerNow.OrderProcessingHost.exe is the (new) host application for the workflow that handles the order process.
    4. DinnerNow.RestaurantProcessingHost.exe is the (new) host application for the workflow that handles the restaurant process.
    5. DinnerNowKiosk.exe is the (only slightly modified) DinnerNow in-restaurant kiosk.
    6. Not in .\bin, but rather started/deployed from Visual Studio, is the (also only slightly modified) Windows Mobile delivery app.
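
And here’s the small connectivity check mentioned in step 3 – a minimal sketch, not part of the DinnerNow drop. It simply confirms that your SQLEXPRESS instance is reachable over TCP on port 1433 with the ‘sa’ credentials you set up, which is the same path the DBBridge/PortBridge plumbing will use:

    using System;
    using System.Data.SqlClient;

    class SqlTcpCheck
    {
        static void Main()
        {
            // "tcp:" forces the TCP provider; put in the 'sa' password you set in step 3.
            var connectionString =
                "Data Source=tcp:localhost,1433;Initial Catalog=aspnetdb;User ID=sa;Password=<sqlserver-password>";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Console.WriteLine("Connected to SQL Server " + conn.ServerVersion + " over TCP.");
            }
        }
    }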

 

Please also note that the DinnerNow PowerShell support and the other test and diagnostics capabilities haven’t been touched here yet.

Oh, and … this is provided as-is … I’ll do my best to discuss some of the patterns over the next several weeks, but I don’t have time to provide 1:1 support.

Here’s the code:

DinnerNow-SVC18-PDC09.zip (2.35 MB)
Categories: .NET Services | Azure | Talks | AppFabric

November 18, 2009 @ 05:37 PM

Building “hybrid” cloud applications, where parts of an app live up in a cloud infrastructure and other parts live at a hosting site, in a data center, or even in your house, ought to be simple – especially in this day and age of Web services. You create a Web service, make it accessible through your firewall and NAT, and the cloud-hosted app calls it. That’s as easy as it ought to be.

Unfortunately it’s not always that easy. If the server sits behind an Internet connection with dynamically assigned IP addresses, if the upstream ISP is blocking select ports, if it’s not feasible to open up inbound firewall ports, or if you have no influence over the infrastructure whatsoever, reaching an on-premise service from the cloud (or anywhere else) is a difficult thing to do. For these scenarios (and others) our team is building the Windows Azure platform AppFabric Service Bus (friends just call it Service Bus).

Now – the Service Bus and the client bits in the Microsoft.ServiceBus.dll assembly are great if you have services that can be readily hooked up into the Service Bus because they’re built with WCF. For services that aren’t built with WCF, but are at least using HTTP, I’ve previously shown a way to hook them into Service Bus and have also demoed an updated version of that capability at Sun’s JavaOne. I’ll release an update for those bits tomorrow after my talk at PDC09 – the version currently here on my blog (ironically) doesn’t play well with SOAP and also doesn’t have rewrite capabilities for WSDL. The new version does.

But what if your service isn’t a WCF service or doesn’t speak HTTP? What if it speaks SMTP, SNMP, POP, IMAP, RDP, TDS, SSH, ETC?

Introducing Port Bridge

“Port Bridge” – which is just a descriptive name for this code sample, not an attempt at branding – is a point-to-point tunneling utility to help with these scenarios. Port Bridge consists of two components, the “Port Bridge Service” and the “Port Bridge Agent”. Here’s a picture:

[Figure: Port Bridge topology – the Agent accepts local TCP/named-pipe connections and relays them through the Service Bus to the Service, which forwards them to the actual listener, e.g. SQL Server on port 1433.]

The Agent’s job is to listen for and accept TCP or Named Pipe connections on a configurable port or local pipe name. The Service’s job is to accept incoming connections from the Agent, establish a duplex channel with the Agent, and pump the data from the Agent to the actual listening service – and vice versa. It’s actually quite simple. In the picture above you see that the Service is configured to connect to a SQL Server listening at the SQL Server default port 1433, and that the Agent – running on a different machine – is listening on port 1433 as well, thus mapping the remote SQL Server onto the Agent machine as if it ran there. You can (and I expect this to be the more common case) map the service on the Agent to any port you like – say, higher up at 41433.

In order to increase responsiveness and throughput for protocols that are happy to kill and reestablish connections, as HTTP does, “Port Bridge” always multiplexes the concurrent traffic that’s flowing between two parties over the same logical socket. When using Port Bridge to bridge to a remote HTTP proxy that the Service machine can see, but the Agent machine can’t see (which turns out to be the at-home scenario that this capability emerged from), there are very many, very short-lived connections being tunneled through the channel. Creating a new Service Bus channel for each of these connections is feasible – but not very efficient. Holding on to a connection for an extended period of time and multiplexing traffic over it is also beneficial in the Port Bridge case because it uses the Service Bus Hybrid connection mode by default. With Hybrid, all connections are first established through the Service Bus Relay, and then our bits do a little “NAT dance” trying to figure out whether there’s a way to connect both parties with a direct socket – if that works, the connection gets upgraded to the direct socket in-flight. The probing, handshake, and upgrade of the socket may take 2-20 seconds, and there’s some degree of luck involved in getting that direct socket established on a very busy NAT – and thus we want to maximize the use of that precious socket instead of throwing it away all the time.
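
To make that concrete, here’s a minimal sketch – under my own assumptions, not the sample’s actual code – of how a relay binding gets configured for the Hybrid connection mode; the contract, namespace, and path are placeholders:

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    interface IEcho
    {
        [OperationContract]
        string Echo(string text);
    }

    class HybridConnectionSketch
    {
        static void Main()
        {
            // Hybrid: the connection starts out relayed and is upgraded in-flight
            // to a direct socket if the NAT probing succeeds.
            var binding = new NetTcpRelayBinding
            {
                ConnectionMode = TcpRelayConnectionMode.Hybrid
            };

            // Placeholder namespace and path; Port Bridge uses its own naming scheme.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "echo");

            // Credentials (TransportClientEndpointBehavior) and a listening service are
            // omitted for brevity; this only illustrates the binding setup.
            var factory = new ChannelFactory<IEcho>(binding, new EndpointAddress(address));
            IEcho channel = factory.CreateChannel();
            Console.WriteLine(channel.Echo("hello over a hybrid relay connection"));
        }
    }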

That seems familiar?!

You may notice that SocketShifter (built by our friends at AWS in the UK) is quite similar to Port Bridge. Even though the timing of the respective releases may not suggest it, Port Bridge is indeed SocketShifter’s older brother. Because we couldn’t make up our minds for a while on whether to release Port Bridge, I had AWS take a look at the service contract shown below and explained a few principles that I’m also explaining here, and they had a first version of SocketShifter running within a few hours. There’s nothing wrong with having two variants of the same thing.

How does it work?

Since I’m publishing this as a sample, I obviously need to spend a little time on the “how”, even though I’ll limit that here and will explain it in more detail in a future post. At the heart of the app, the contract that’s used between the Agent and the Service is a simple duplex WCF contract:

    // Duplex contract between the Port Bridge Agent and Service; the terse namespace,
    // name, and action strings keep the serialized message metadata small.
    [ServiceContract(Namespace="n:", Name="idx", CallbackContract=typeof(IDataExchange), SessionMode=SessionMode.Required)]
    public interface IDataExchange
    {
        // Establishes the logical connection for this session.
        [OperationContract(Action="c", IsOneWay = true, IsInitiating=true)]
        void Connect(string i);
        // Sends a chunk of data; used in both directions via the callback contract.
        [OperationContract(Action = "w", IsOneWay = true)]
        void Write(TransferBuffer d);
        // Tears the logical connection down and ends the session.
        [OperationContract(Action = "d", IsOneWay = true, IsTerminating = true)]
        void Disconnect();
    }

There’s a way to establish a session, send data either way, and close the session. The TransferBuffer type is really just a trick to avoid extra buffer copies during serialization for efficiency reasons. But that’s it. The rest of Port Bridge is a set of queue-buffered streams and pumps to make the data packets flow smoothly and to accept inbound sockets/pipes and dispatch them out to the proxied services. What’s noteworthy is that Port Bridge doesn’t use WCF streaming, but sends data in chunks – which allows for much better flow control and enables multiplexing.
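
To illustrate the pump idea, here’s a minimal sketch under my own assumptions – it’s not the sample’s code, and I’m guessing at the shape of TransferBuffer – that reads chunks from a local stream and pushes them through the IDataExchange channel shown above:

    using System;
    using System.IO;

    // Assumed shape of TransferBuffer for this sketch; the real type in the sample
    // is specifically built to avoid extra buffer copies and may look different.
    public class TransferBuffer
    {
        public byte[] Data;
        public int Length;
    }

    public static class UpstreamPump
    {
        // Reads from a local stream (e.g. the NetworkStream of an accepted socket)
        // and forwards each chunk over the duplex IDataExchange channel until the
        // local side closes the connection.
        public static void Run(Stream localStream, IDataExchange channel)
        {
            var buffer = new byte[64 * 1024]; // roughly the 64K frame size discussed below
            int read;
            while ((read = localStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                channel.Write(new TransferBuffer { Data = chunk, Length = read });
            }
            channel.Disconnect(); // one-way, terminating operation on the contract
        }
    }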

Now you might say: “You are using a WCF ServiceContract? Isn’t that using SOAP, and doesn’t that cause ginormous overhead?” No, it doesn’t. We’re using the WCF binary encoder in session mode here. That’s about as efficient as you can get on the wire with serialized data. The per-frame SOAP overhead for net.tcp with the binary encoder in session mode is on the order of 40-50 bytes per message because of dictionary-based metadata compression. The binary encoder also isn’t doing any base64 trickery but treats binary as binary – one byte is one byte. Port Bridge uses a default frame size of 64K (which gets filled up in high-volume streaming cases due to the built-in Nagling support), so we’re looking at an overhead of far less than 0.1% – 50 bytes on a 64KB frame is roughly 0.08%. That’s not shabby.

How do I use it?

This is a code sample and thus you’ll have to build it using Visual Studio 2008. You’ll find three code projects: PortBridge (the Service), PortBridgeAgent (the Agent), and the Microsoft.Samples.ServiceBus.Connections assembly that contains the bulk of the logic for Port Bridge. It’s mostly straightforward to embed the agent side or the service side into other hosts and I’ll show that in a separate post.

Service

The service’s exe file is “PortBridge.exe” and is both a console app and a Windows Service. If the Windows Service isn’t registered, the app will always start as a console app. If the Windows Service is registered (with the installer or with installutil.exe) you can force console-mode with the –c command line option.
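
For context, a dual console/Windows Service host is typically wired up roughly like this – a sketch of the common pattern, not necessarily the sample’s exact code; the service class and startup method are placeholders:

    using System;
    using System.ServiceProcess;

    class Program
    {
        static void Main(string[] args)
        {
            // Run as a console app when launched interactively or when "-c" is passed;
            // otherwise hand control to the Windows Service infrastructure.
            bool consoleMode = Environment.UserInteractive ||
                               Array.Exists(args, a => string.Equals(a, "-c", StringComparison.OrdinalIgnoreCase));

            if (consoleMode)
            {
                StartPortBridgeHost();                           // placeholder for the real startup logic
                Console.WriteLine("Running. Press ENTER to exit.");
                Console.ReadLine();
            }
            else
            {
                ServiceBase.Run(new PortBridgeWindowsService()); // placeholder ServiceBase-derived class
            }
        }

        static void StartPortBridgeHost() { /* open the Service Bus listeners here */ }
    }

    class PortBridgeWindowsService : ServiceBase
    {
        protected override void OnStart(string[] args) { /* same startup logic as above */ }
        protected override void OnStop() { /* close the listeners */ }
    }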

The app.config file on the Service Side (PortBridge/app.config, PortBridge.exe.config in the binaries folder) specifies what ports or named pipes you want to project into Service Bus:

  <portBridge serviceBusNamespace="mynamespace" serviceBusIssuerName="owner" serviceBusIssuerSecret="xxxxxxxx" localHostName="mybox">
    <hostMappings>
      <add targetHost="localhost" allowedPorts="3389" />
    </hostMappings>
  </portBridge>

The serviceBusNamespace attribute takes your Service Bus namespace name, and the serviceBusIssuerSecret the respective secret. The serviceBusIssuerName should remain “owner” unless you know why you want to change it. If you don’t have an AppFabric account you might not understand what I’m writing about: go make one.

The localHostName attribute is optional and when set, it’s the name that’s being used to map “localhost” into your Service Bus namespace. By default the name that’s being used is the good old Windows computer-name.

The hostMappings section contains a list of hosts and rules for what you want to project out to Service Bus. Mind that all inbound connections to the endpoints generated from the host mappings section are protected by the Access Control service and require a token that grants access to your namespace – which is already very different from opening up a port in your firewall. If you open up port 3389 (Remote Desktop) through your firewall and NAT, everyone can walk up to that port and try their password-guessing skills. If you open up port 3389 via Port Bridge, you first need to get through the Access Control gate before you can even get at the remote port.

New host mappings are added with the add element. You can add any host that the machine running the Port Bridge service can “see” via the network. The allowedPorts and allowedPipes attributes define which TCP ports and/or which local named pipes are accessible. Examples:

  • <add targetHost="localhost" allowedPorts="3389" /> project the local machine into Service Bus and only allow Remote Desktop (3389)
  • <add targetHost="localhost" allowedPorts="3389,1433" /> project the local machine into Service Bus and allow Remote Desktop (3389) and SQL Server TDS (1433)
  • <add targetHost="localhost" allowedPorts="*" /> project the local machine into Service Bus and allow connections to any TCP port
  • <add targetHost="localhost" allowedPipes="sql/query" /> project the local machine into Service Bus and allow no TCP connections, but allow named pipe connections to \\.\pipe\sql\query
  • <add targetHost="otherbox" allowedPorts="1433" /> project the machine “otherbox” into Service Bus and allow SQL Server TDS connections via TCP

Agent

The agent’s exe file is “PortBridgeAgent.exe” and is also both a console app and a Windows Service.

The app.config file on the Agent side (PortBridgeAgent/app.config, PortBridgeAgent.exe.config in the binaries folder) specifies which ports or pipes you want to project into the Agent machine and whether and how you want to firewall these ports. The firewall rules here are not interacting with your local firewall. This is an additional layer of protection.

  <portBridgeAgent serviceBusNamespace="mysolution" serviceBusIssuerName="owner" serviceBusIssuerSecret="xxxxxxxx">
    <portMappings>
      <port localTcpPort="13389" targetHost="mymachine" remoteTcpPort="3389">
        <firewallRules>
          <rule source="127.0.0.1" />
          <rule sourceRangeBegin="10.0.0.0" sourceRangeEnd="10.255.255.255" />
        </firewallRules>
      </port>
    </portMappings>
  </portBridgeAgent>

Again, the serviceBusNamespace attribute takes your Service Bus namespace name, and the serviceBusIssuerSecret the respective secret.

The portMappings collection holds the individual ports or pipes you want to bring onto the local machine. Shown above is a mapping of Remote Desktop (port 3389 on the machine with the computer name or localHostName ‘mymachine’) to the local port 13389. Once Service and Agent are running, you can connect to the agent machine on port 13389 using the Remote Desktop client – with PortBridge mapping that to port 3389 on the remote box.

The firewallRules collection allows (un-)constraining the TCP clients that may connect to the projected port. By default, only connections from the same machine are permitted.

For named pipes, the configuration is similar, even though there are no firewall rules and named pipes are always constrained to local connectivity by a set of ACLs that are applied to the pipe. Pipe names must be relative. Here’s what a named pipe projection of a default SQL Server instance could look like:

     <port localPipe="sql/remote" targetHost="mymachine" remotePipe="sql/query"/>
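
With that mapping in place, a client on the agent machine could connect through the projected pipe; here’s a sketch (the connection string format and credentials are my assumptions, not part of the sample):

    using System;
    using System.Data.SqlClient;

    class PipeClientSketch
    {
        static void Main()
        {
            // "np:" selects the named pipe provider; "sql/remote" is the local pipe the
            // agent creates, which Port Bridge forwards to \\.\pipe\sql\query on 'mymachine'.
            var connectionString =
                @"Data Source=np:\\.\pipe\sql\remote;Initial Catalog=master;User ID=sa;Password=<password>";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Console.WriteLine("Connected via the projected named pipe.");
            }
        }
    }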

There’s more to write about this, but how about I let you take a look at the code first. I’ve also included two setup projects that can easily install Agent and Service as Windows Services. You obviously don’t have to use those.

[Updated archive (2010-06-10) fixing config issue:]

PortBridge20100610.zip (90.99 KB)
Categories: .NET Services | Azure | ISB