This post explains an essential class for asynchronous programming that lurks in the depths of the WCF samples: InputQueue<T>. If you need to write efficient server-side apps, you should consider reading through this and adding InputQueue<T> to your arsenal.

Let me start with: This blog post is 4 years late. Sorry! – and with that out of the way:

The WCF samples ship with several copies of a class that’s marked as internal in the System.ServiceModel.dll assembly: InputQueue<T>. Why are these samples – mostly those implementing channel-model extensions – bringing local copies of this class with them? It’s an essential tool for implementing the asynchronous call paths of many aspects of channels correctly and efficiently.

If you look closely enough, the WCF channel infrastructure resembles the Berkeley Socket model quite a bit – especially on the server side. There’s a channel listener that’s constructed on the server side, and when that is opened (usually under the covers of the WCF ServiceHost) that operation is largely equivalent to calling ‘listen’ on a socket – the network endpoint is ready for business. On sockets you’ll then call ‘accept’ to accept the next available socket connection from a client; in WCF you call ‘AcceptChannel’ to accept the next available (session-) channel. On sockets you then call ‘receive’ to obtain bytes; on a channel you call ‘Receive’ to obtain a message.

Before and between calls to ‘AcceptChannel’ made by the server-side logic, client-initiated connections – and thus channels – may be coming in and queue up for a bit before they are handed out to the next caller of ‘AcceptChannel’, or the asynchronous equivalent ‘Begin/EndAcceptChannel’ method pair. The number of channels that may be pending is configured in WCF with the ‘ListenBacklog’ property that’s available on most bindings.

I wrote ‘queue up’ there since that’s precisely what happens – those newly created channels on top of freshly accepted sockets or HTTP request channels are enqueued into an InputQueue<T> instance and (Begin-)Accept is implemented as a dequeue operation on that queue. There are two particular challenges here that make the regular Queue<T> class from the System.Collections.Generic namespace unsuitable for use in the implementation of that mechanism: Firstly, the Dequeue method there is only available as a synchronous variant and also doesn’t allow for specifying a timeout. Secondly, the queue implementation doesn’t really help much with implementing the ListenBacklog quota where not only the length of the queue is limited to some configured number of entries, but accepting further connections/channels from the underlying network is also suspended for as long as the queue is at capacity and needs to resume as soon as the pressure is relieved, i.e. a caller takes a channel out of the queue.
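To make the contrast concrete, here’s a minimal, self-contained sketch of a bounded queue whose dequeue takes a timeout. It is emphatically not the real InputQueue<T> – which services dequeuers through async callbacks rather than blocked threads – just an illustration of the two Queue<T> shortcomings named above; all names here are invented for this post:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustration only: a bounded queue with a timeout-capable dequeue.
// The real InputQueue<T> goes further and never blocks a thread.
class BoundedQueue<T>
{
    readonly Queue<T> items = new Queue<T>();
    readonly object syncRoot = new object();
    readonly int capacity;

    public BoundedQueue(int capacity) { this.capacity = capacity; }

    public bool TryEnqueue(T item)
    {
        lock (this.syncRoot)
        {
            if (this.items.Count >= this.capacity)
            {
                return false; // at capacity; the producer must suspend
            }
            this.items.Enqueue(item);
            Monitor.Pulse(this.syncRoot); // wake one waiting dequeuer
            return true;
        }
    }

    public bool TryDequeue(TimeSpan timeout, out T item)
    {
        lock (this.syncRoot)
        {
            while (this.items.Count == 0)
            {
                // note: the timeout restarts on a spurious wakeup;
                // good enough for a sketch
                if (!Monitor.Wait(this.syncRoot, timeout))
                {
                    item = default(T);
                    return false; // timed out
                }
            }
            item = this.items.Dequeue();
            return true;
        }
    }
}
```

The real class solves the same two problems without ever parking a thread in a blocking wait, which is what the rest of this post is about.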

To show that InputQueue<T> is a very useful general purpose class even outside of the context of the WCF channel infrastructure, I’ve lifted a version of it from one of the most recent WCF channel samples, made a small number of modifications that I’ll write about later, and created a little sample around it that I’ve attached to this post.

The sample I’ll discuss here is simulating parsing/reading IP addresses from a log-file and then performing a reverse DNS name resolution on those addresses – something that you’d do in a web-server log-analyzer or as the background task in a blog engine while preparing statistics.

Reverse DNS name resolution is quite interesting since it’s embarrassingly easy to parallelize and each resolution commonly takes a really long time (4-5 seconds) – whereby all the work is done elsewhere. The process issuing the queries is mostly sitting around idle waiting for the response. Therefore, it’s a good idea to run a number of DNS requests in parallel, but it’s a terrible idea to have any of these requests execute as a blocking call, burning a thread. Since we’re assuming that we’re reading from a log file that requires some parsing, it would also be a spectacularly bad idea to have multiple concurrent threads compete for access to that file and get into each other’s way. And since it is a file and we need to lift things up from disk, we probably shouldn’t do that ‘just in time’ as a DNS resolution step is done; there should rather be some data readily waiting for processing. InputQueue<T> is enormously helpful in such a scenario.

The key file of the sample code – the implementation of the queue itself aside – is obviously Program.cs. Here’s Main():

static void Main(string[] args)
{
    int maxItemsInQueue = 10;
    InputQueue<IPAddress> logDataQueue = new InputQueue<IPAddress>();
    int numResolverLoops = 20;
    ManualResetEvent shutdownCompleteEvent = new ManualResetEvent(false);
    List<IPAddressResolverLoop> resolverLoops = new List<IPAddressResolverLoop>();
 
    Console.WriteLine("You can stop the program by pressing ENTER.");

We’re setting up a new InputQueue<IPAddress> here into which we’ll throw the parsed addresses from our acquisition loop that simulates reading from the log. The queue’s capacity will be limited to just 10 entries (maxItemsInQueue is the input value) and we will run 20 ‘resolver loops’, which are logical threads that process IP-to-hostname resolution steps.

 
    // set up the loop termination callback
    WaitCallback loopTerminationCallback = o =>
    {
        if (Interlocked.Decrement(ref numResolverLoops) == 0)
        {
            shutdownCompleteEvent.Set();
        }
    };
 
    // set up the resolver loops
    for (int loop = 0; loop < numResolverLoops; loop++)
    {
        // add the resolver loop 'i' and set the done flag when the
        // last of them terminates
        resolverLoops.Add(
            new IPAddressResolverLoop(
                logDataQueue, loop, 
                loopTerminationCallback, null));
    }

Next we’re kicking off the resolver loops – we’ll look at these in detail a bit later. We’ve got a ManualResetEvent lock object that guards the program’s exit until all these loops have completed, and we’re going to set it to signaled once the last loop completes – that’s what the loopTerminationCallback anonymous method is for. We’re registering the method with each of the loops; as they complete the method gets called, and the last call sets the event. Each loop gets a reference to the logDataQueue from where it gets its work.
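The countdown pattern behind loopTerminationCallback is worth isolating: Interlocked.Decrement makes the ‘last one out turns off the lights’ logic safe without any lock. Here’s a small self-contained variant (names invented for illustration; on .NET 4 the CountdownEvent class packages the same idea):

```csharp
using System;
using System.Threading;

public static class CountdownDemo
{
    // Runs 'count' thread-pool work items; only the last one to finish
    // sets the event. Returns true once all have completed.
    public static bool RunWorkers(int count)
    {
        int pending = count;
        ManualResetEvent allDone = new ManualResetEvent(false);

        WaitCallback completed = o =>
        {
            // the decrement that reaches zero signals the event
            if (Interlocked.Decrement(ref pending) == 0)
            {
                allDone.Set();
            }
        };

        for (int i = 0; i < count; i++)
        {
            ThreadPool.QueueUserWorkItem(completed);
        }

        return allDone.WaitOne(TimeSpan.FromSeconds(30));
    }

    public static void Main()
    {
        Console.WriteLine("all done: {0}", RunWorkers(20));
    }
}
```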

    // set up the acquisition loop; the loop auto-starts
    using (LogDataAcquisitionLoop acquisitionLoop =
        new LogDataAcquisitionLoop(logDataQueue, maxItemsInQueue))
    {
        // hang main thread waiting for ENTER
        Console.ReadLine();
        Console.WriteLine("*** Shutdown initiated.");
    }

Finally we’re starting the acquisition loop that gets the data from the log file. The loop gets a reference to the logDataQueue where it places the acquired items, and it’s passed the maxItemsInQueue quota that governs how many items may be read ahead into the queue. Once the user presses the ENTER key, the acquisition loop object is disposed by way of exiting the using scope, which stops the loop.

    // shut down the queue; the resolvers will auto-close
    // as the queue drains. We don't need to close them here.
    logDataQueue.Shutdown();
 
    // wait for all work to complete
    shutdownCompleteEvent.WaitOne();
}

Lastly, the queue is shut down (by fittingly calling Shutdown). Shutdown closes the queue (all further enqueue operations are absorbed) and causes all pending readers for which no more entries are available on the queue to unblock immediately and return null. The resolver loops will complete their respective jobs and will terminate whenever they dequeue null from the queue. As they terminate, they call the registered termination callback (loopTerminationCallback from above) and that will eventually cause shutdownCompleteEvent to become signaled as discussed above.

The log-reader simulator isn’t particularly interesting for this sample, even though one of the goodies is that the simulation executes on an I/O completion port instead of a managed thread-pool thread – that’s another blog post. The two methods of interest are Begin/EndGetLogData – all that’s of interest here is that EndGetLogData returns an IPAddress that’s assumed to be parsed out of a log.

class IPAddressLogReaderSimulator
{
    public IAsyncResult BeginGetLogData(AsyncCallback callback, object data);
    public IPAddress EndGetLogData(IAsyncResult result);
}

The simulator is used internally by the LogDataAcquisitionLoop class – which we’ll drill into because it implements the throttling mechanism on the queue.

class LogDataAcquisitionLoop : IDisposable
{
    readonly IPAddressLogReaderSimulator ipAddressLogReaderSimulator;
    readonly InputQueue<IPAddress> logDataQueue;
    int maxItemsInQueue;
    int readingSuspended;
    bool shuttingDown;
 
    public LogDataAcquisitionLoop(InputQueue<IPAddress> logDataQueue, int maxItemsInQueue)
    {
        this.logDataQueue = logDataQueue;
        this.maxItemsInQueue = maxItemsInQueue;
        this.shuttingDown = false;
        this.ipAddressLogReaderSimulator = new IPAddressLogReaderSimulator();
        this.ipAddressLogReaderSimulator.BeginGetLogData(this.LogDataAcquired, null);
    }

The constructor sets up the shared state of the loop and kicks off the first read operation on the simulator. Once BeginGetLogData has acquired the first IPAddress (which will happen very quickly), the LogDataAcquired callback method will be invoked.

    void LogDataAcquired(IAsyncResult result)
    {
        IPAddress address = this.ipAddressLogReaderSimulator.EndGetLogData(result);
 
        Console.WriteLine("-- added {0}", address);
        this.logDataQueue.EnqueueAndDispatch(address, this.LogDataItemDequeued);
        if (!this.shuttingDown && this.logDataQueue.PendingCount < this.maxItemsInQueue)
        {
            this.ipAddressLogReaderSimulator.BeginGetLogData(this.LogDataAcquired, null);
        }
        else
        {
            // the queue will be at the defined capacity, thus abandon 
            // the read loop - it'll be picked up by LogDataItemDequeued
            // as the queue pressure eases
            Interlocked.Exchange(ref this.readingSuspended, 1);
            Console.WriteLine("-- suspended reads");
        }
    }

The callback method gets the IPAddress and puts it into the queue – using the InputQueue<T>.EnqueueAndDispatch(T, Action) method. There are two aspects that are quite special about that method when compared to the regular Queue<T>.Enqueue(T) method. First, it does take a callback as the second argument alongside the item to be enqueued; second, the method name isn’t just Enqueue, it also says Dispatch.

When EnqueueAndDispatch() is called, the item and the callback get put into an internal item queue – that’s the ‘enqueue’ part. As we will see in context a bit later in this post, the ‘dequeue’ operation on the queue is the BeginDequeue/EndDequeue asynchronous method call pair. There can be any number of concurrent BeginDequeue requests pending on the queue. ‘Pending’ means that the calls – or rather their async callbacks and async state – are registered in another queue internal to InputQueue<T> that preserves the call order. Thus, BeginDequeue only puts the async callback and async state into that queue and returns. There is no thread spun or hung. That’s all it does.

As things go, the best opportunity to service a pending dequeue operation on a queue is when an item is being enqueued. Consequently, EnqueueAndDispatch() will first put the item into the internal queue and will then look whether there are registered waiters and/or readers – waiters are registered by ‘(Begin-)WaitForItem’, readers are registered by ‘(Begin-)Dequeue’. Since it’s known that there’s a new item in the queue now, the operation will iterate over all waiters and complete them – and does so by invoking their async callbacks, effectively lending the enqueue operation’s thread to the waiters. If there’s at least one pending reader, it’ll then pop a message from the head of the internal item queue and call the reader’s async callback, lending the enqueue operation’s thread to the processing of the dequeue operation. If that just made your head spin – yes, the item may have been dequeued and processed by the time EnqueueAndDispatch returns.
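In case the thread-lending mechanics are easier to digest as code, here’s a stripped-down sketch of the dispatch idea – again not the actual InputQueue<T> implementation, just an illustration with invented names and with the IAsyncResult machinery replaced by a plain Action<T> callback:

```csharp
using System;
using System.Collections.Generic;

// Illustration only: 'dispatch on enqueue'. BeginDequeue parks a
// callback; EnqueueAndDispatch hands a new item straight to the oldest
// parked reader, on the *enqueuer's* thread, if one is waiting.
class DispatchQueue<T>
{
    readonly Queue<T> items = new Queue<T>();
    readonly Queue<Action<T>> readers = new Queue<Action<T>>();
    readonly object syncRoot = new object();

    public void BeginDequeue(Action<T> callback)
    {
        T item = default(T);
        bool haveItem = false;
        lock (this.syncRoot)
        {
            if (this.items.Count > 0)
            {
                item = this.items.Dequeue();
                haveItem = true;
            }
            else
            {
                this.readers.Enqueue(callback); // park; no thread blocked
            }
        }
        if (haveItem) { callback(item); }
    }

    public void EnqueueAndDispatch(T item)
    {
        Action<T> reader = null;
        lock (this.syncRoot)
        {
            if (this.readers.Count > 0) { reader = this.readers.Dequeue(); }
            else { this.items.Enqueue(item); }
        }
        // the reader's callback runs on this (the enqueuer's) thread
        if (reader != null) { reader(item); }
    }
}
```

Note how EnqueueAndDispatch invokes the parked reader’s callback before it returns – exactly the ‘the item may have been dequeued and processed already’ behavior described above.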

There is an overload of EnqueueAndDispatch() that takes an extra boolean parameter that lets you cause the dispatch operation to happen on a different thread, and there is also an EnqueueWithoutDispatch() method that skips the dispatch step, plus a standalone Dispatch() method.

The callback supplied to EnqueueAndDispatch(), here the LogDataItemDequeued method, is an Action delegate. The queue will call this callback as the item is being dequeued – more precisely, when the item has been removed from the internal item queue, but just before it is returned to the caller. That turns out to be quite handy. If you take another look at the LogDataAcquired method you’ll notice that we’ve got two alternate code paths after EnqueueAndDispatch(). The first branch is taken when the queue has not reached capacity and it’s not shutting down. When that’s so, we’re scheduling getting the next log item – otherwise we don’t. Instead, we set the readingSuspended flag and quit – effectively terminating and abandoning the loop. So how does that get restarted when the queue is no longer at capacity? The LogDataItemDequeued callback!

    void LogDataItemDequeued()
    {
        // called whenever an item is dequeued. First we check 
        // whether the queue is no longer full after this 
        // operation and then we check whether we need to resume
        // the read loop.
        if (!this.shuttingDown &&
            this.logDataQueue.PendingCount < this.maxItemsInQueue &&
            Interlocked.CompareExchange(ref this.readingSuspended, 0, 1) == 1)
        {
            Console.WriteLine("-- resuming reads");
            this.ipAddressLogReaderSimulator.BeginGetLogData(this.LogDataAcquired, null);
        }
    }

The callback gets called for each item that gets dequeued, which means that we get an opportunity to restart the loop when it’s been stalled because the queue reached capacity. So we’re checking here whether the queue isn’t shutting down and whether it’s below capacity, and if that’s so and the readingSuspended flag is set, we’re restarting the read loop. And that’s how the throttle works.
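The reason for Interlocked.CompareExchange rather than a plain flag test is that dequeue callbacks can race on multiple threads; only the caller that actually flips readingSuspended from 1 to 0 may restart the read loop, or we’d end up with several concurrent read loops. A small self-contained demonstration of the guard (names invented for this post):

```csharp
using System;
using System.Threading;

public static class ResumeGuardDemo
{
    static int readingSuspended;
    static int resumeCount;

    static void TryResume()
    {
        // Only the thread that flips the flag from 1 to 0 wins;
        // every concurrent caller sees 0 and does nothing.
        if (Interlocked.CompareExchange(ref readingSuspended, 0, 1) == 1)
        {
            Interlocked.Increment(ref resumeCount);
        }
    }

    // Races 'threadCount' threads at the guard; returns how many got through.
    public static int Run(int threadCount)
    {
        readingSuspended = 1;
        resumeCount = 0;
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(TryResume);
            threads[i].Start();
        }
        foreach (Thread t in threads)
        {
            t.Join();
        }
        return resumeCount;
    }

    public static void Main()
    {
        Console.WriteLine("resumed {0} time(s)", Run(8)); // always 1
    }
}
```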

So now we’ve got the data from the log in the queue and we’re throttling nicely so that we don’t pull too much data into memory. How about taking a look at the DNS resolver loops that process the data?

class IPAddressResolverLoop : IDisposable
{
    readonly InputQueue<IPAddress> logDataQueue;
    readonly int loop;
    readonly WaitCallback loopCompleted;
    readonly object state;
    bool shutdown;
 
    public IPAddressResolverLoop(InputQueue<IPAddress> logDataQueue, int loop, WaitCallback loopCompleted, object state)
    {
        this.logDataQueue = logDataQueue;
        this.loop = loop;
        this.loopCompleted = loopCompleted;
        this.state = state;
        this.logDataQueue.BeginDequeue(TimeSpan.MaxValue, this.IPAddressDequeued, null);
    }

This loop is also implemented as a class, and the fields hold shared state that’s initialized in the constructor. This loop also auto-starts, and does so by calling BeginDequeue on the input queue. As stated above, BeginDequeue commonly just parks the callback and returns.

    void IPAddressDequeued(IAsyncResult ar)
    {
        IPAddress address = this.logDataQueue.EndDequeue(ar);
        if (!this.shutdown && address != null)
        {
            Console.WriteLine("-- took {0}", address);
            Dns.BeginGetHostEntry(address, this.IPAddressResolved, new object[] { Stopwatch.StartNew(), address });
        }
        else
        {
            this.loopCompleted(this.state);
        }
    }

As an IPAddress becomes available on the queue, the callback is invoked – quite likely on a thread lent by EnqueueAndDispatch() and therefore, if you trace things back, on the very thread the log file simulator used to signal completion of the BeginGetLogData method. If we get an address and the value isn’t null, we’ll then proceed to schedule the DNS lookup via Dns.BeginGetHostEntry. Otherwise we’ll terminate the loop and call the loopCompleted callback. In Main() that’s the anonymous method that counts down the loop counter and signals the event when it falls to zero.

    void IPAddressResolved(IAsyncResult ar)
    {
        var args = ((object[])ar.AsyncState);
        var stopwatch = (Stopwatch)args[0];
        var address = (IPAddress)args[1];
 
        stopwatch.Stop();
        double msecs = stopwatch.ElapsedMilliseconds;
 
        try
        {
            IPHostEntry entry = Dns.EndGetHostEntry(ar);
            Console.WriteLine("{0}: {1} {2}ms", this.loop, entry.HostName, msecs);
        }
        catch (SocketException)
        {
            // couldn't resolve. print the literal address
            Console.WriteLine("{0}: {1} {2}ms", this.loop, address, msecs);
        }
        // done with this entry, get the next
        this.logDataQueue.BeginDequeue(TimeSpan.MaxValue, this.IPAddressDequeued, null);
    }

The IPAddressResolved method just deals with the mechanics of printing out the result of the lookup and then schedules another BeginDequeue call to start the next iteration.

Summary: The enabler for and the core piece of the implementation of this scenario is InputQueue<T> – the dequeue-callback enables implementing throttling effectively, and the dispatch logic provides an efficient way to use threads in applications built on asynchronous programming patterns, especially in I/O-driven situations as illustrated here.

And last but not least – here’s teh codez; project file is for VS2010, throw the files into a new console app for VS2008 and mark the project to allow unsafe code (for the I/O completion thread pool code).

UsingInputQueue.zip (13.85 KB) 

or if you'd rather have a version of InputQueue that is using the regular thread pool, download the WCF samples and look for InputQueue.cs.

[The sample code posted here is subject to the Windows SDK sample code license]

Categories: Architecture | CLR | WCF


Juval Löwy’s very successful WCF book is now available in its third edition – and Juval asked me to update the foreword this time around. It’s been over three years since I wrote the foreword to the first edition, so an update was due: WCF has moved on quite a bit, and its use in the customer landscape and inside of MS has deepened – we’re building a lot of very interesting products on top of the WCF technology across all businesses, not least of which is the Azure AppFabric Service Bus that I work on, which is entirely based on WCF services.

You can take a peek into the latest edition at the O’Reilly website and read my foreword if you care. To be clear: It’s the least important part of the whole book :-)

Categories: AppFabric | Azure | WCF | Web Services

In case you need a refresher or update about the things our team and I work on at Microsoft, go here for a very recent and very good presentation by my PM colleague Maggie Myslinska from TechEd Australia 2010 about Windows Azure AppFabric, with Service Bus demos and a demo of the new Access Control V2 CTP.

Categories: AppFabric | SOA | Azure | Technology | ISB | WCF | Web Services

April 3, 2009
@ 05:09 PM

XML-RPC for WCF Download here

I had updated my WCF XML-RPC stack for PDC’08 but never got around to posting it (either too busy, or too lazy when not busy). The updated source code is attached to this post.

Unlike the code I posted a while back, the new XML-RPC implementation is no longer a binding with a special encoder; it is implemented entirely as a set of behaviors and extensions for the WCF Service Model. The behavior will work with WCF 3.5 as it ships in the framework and also with the .NET Service Bus March 2009 CTP.

The resulting Service Model programming experience is completely "normal". That means you can also expose the XML-RPC contracts as SOAP endpoints with all the advanced WCF bindings and features if you like. The behaviors support client and service side. I stripped the config support from this version – I’ll add that back once I get around to it. Here's a snippet from the MetaWeblog contract:

[ServiceContract(Namespace = "http://www.xmlrpc.com/metaWeblogApi")]
public interface IMetaWeblog : IBlogger
{
   [OperationContract(Action="metaWeblog.editPost")]
   bool metaweblog_editPost(string postid,
                             string username,
                             string password,
                             Post post,
                             bool publish);

   [OperationContract(Action="metaWeblog.getCategories")]
   CategoryInfo[] metaweblog_getCategories( string blogid,
                                            string username,
                                            string password);
    ...

}

Setting up the endpoint is very easy. Pick the WebHttpBinding (or the WebHttpRelayBinding for .NET Service Bus), create an endpoint, add the XmlRpcEndpointBehavior to the endpoint and you’re good to go.

Uri baseAddress = new UriBuilder(Uri.UriSchemeHttp, Environment.MachineName, -1, "/blogdemo/").Uri;

ServiceHost serviceHost = new ServiceHost(typeof(BloggerAPI));
var epXmlRpc = serviceHost.AddServiceEndpoint(
                  typeof(IBloggerAPI),
                  new WebHttpBinding(WebHttpSecurityMode.None),
                  new Uri(baseAddress, "./blogger"));
epXmlRpc.Behaviors.Add(new XmlRpcEndpointBehavior());

The client is just as simple:

Uri blogAddress = new UriBuilder(Uri.UriSchemeHttp, Environment.MachineName, -1, "/blogdemo/blogger").Uri;

ChannelFactory<IBloggerAPI> bloggerAPIFactory =
     new ChannelFactory<IBloggerAPI>(
             new WebHttpBinding(WebHttpSecurityMode.None),
             new EndpointAddress(blogAddress));
bloggerAPIFactory.Endpoint.Behaviors.Add(new XmlRpcEndpointBehavior());

IBloggerAPI bloggerAPI = bloggerAPIFactory.CreateChannel();

For your convenience I've included complete Blogger, MetaWeblog, and MovableType API contracts along with the respective data types in the test applications. The test app is a small in-memory blog that you can use with the blogging function of Word 2007 or Windows Live Writer or some other blogging client for testing.

Of the other interesting XML-RPC APIs, the Pingback API has the following contract:

[ServiceContract(Namespace="http://www.hixie.ch/specs/pingback/pingback")]
public interface IPingback
{
    [OperationContract(Action="pingback.ping")]
    string ping(string sourceUri, string targetUri);
}

and the WeblogUpdates API looks like this:

[DataContract]
public struct WeblogUpdatesReply
{
    [DataMember]
    public bool flerror;
    [DataMember]
    public string message;
}

[ServiceContract]
public interface IWeblogUpdates
{
    [OperationContract(Action = "weblogUpdates.extendedPing")]
    WeblogUpdatesReply ExtendedPing(string weblogName, string weblogUrl, string checkUrl, string rssUrl);
    [OperationContract(Action="weblogUpdates.ping")]
    WeblogUpdatesReply Ping(string weblogName, string weblogUrl);
}

The code is subject to the Microsoft samples license, which means that you can freely put it into your (blogging) apps as long as you keep the house out of trouble.

Categories: .NET Services | WCF

We've got a discussion forum up on MSDN where you can ask questions about Microsoft .NET Services (Service Bus, Workflow, Access Control): http://social.msdn.microsoft.com/Forums/en-US/netservices/threads/

 

Categories: Talks | Technology | ISB | WCF

October 28, 2008
@ 04:56 AM

According to recent traffic studies, the BitTorrent protocol is now responsible for roughly half of all Internet traffic. That's a lot of sharing of personal photos, self-sung songs, and home videos. Half! Next to text messaging, Instant Messaging applications are the social lifeline for our teenagers these days – so much that the text messaging and IM lingo is starting to become a natural part of the colloquial vocabulary everywhere. Apple's TV, Microsoft's Xbox 360, and Netflix are shaking up the video rental market by delivering streamed or downloadable high-quality video and streams on YouTube have become the new window on the world. Gamers from around the world are meeting in photorealistic virtual online worlds to compete in races, rake in all the gold, or blast their respective Avatars into tiny little pieces.

What does all of that have to do with Web 2.0? Very little. While it's indisputable that the Web provides the glue between many of those experiences, the majority of all Internet traffic and very many of the most interesting Internet applications depend on bi-directional, peer-to-peer connectivity.

These familiar consumer examples have even more interesting counterparts in the business and industrial space. Industrial machinery has ever increasing remote management capabilities that allow complete remote automation, reprogramming, and reconfiguration. Security and environment surveillance systems depend on thousands of widely distributed, remotely controlled cameras and other sensors that sit on street poles, high up on building walls, or somewhere in the middle of a forest. Terrestrial and satellite-based mobile wireless technologies make it possible to provide some form of digital connectivity to almost any place on Earth, but making an array of devices addressable and reachable so that they can be integrated into and controlled by a federated, distributed business solution that can leverage Internet scale and reach remains incredibly difficult.

The primary obstacle to creating pervasive connectivity is that we have run out of IPv4 addresses. There is no mere threat of running out, we're already done. The IPv4 space is practically saturated and it's really only network address translation (NAT) that permits the Internet to grow any further. The shortage is already causing numerous ISPs to move customers behind NATs and not to provide them with public IP address leases any longer. Getting a static public IP address (let alone a range) is getting really difficult. IPv6 holds the promise of making each device (or even every general-purpose computer) uniquely addressable again, but pervasive IPv6 adoption that doesn't require the use of transitional (and constraining) tunneling protocols will still take many years.

The second major obstacle is security. Since the open network is a fairly dangerous place these days and corporate network environments are often, and unfortunately, not much better, the use of firewalls has become ubiquitous and almost all incoming traffic is blocked by default on the majority of computers these days. That's great for keeping the bad guys out, but not so great for everything else – especially not for applications requiring bi-directional connectivity between peers.

Since these constraints are obviously well-known and understood, there is a range of workarounds. In home networking environments the firewall and NAT issues are often dealt with by selectively allowing applications to open inbound ports on the local and network router firewalls using technologies like UPnP, or by opening and forwarding ports by way of manual configuration. Dynamic DNS services help with making particular machines discoverable even if the assigned IP address keeps changing. The problem with those workarounds is that they realistically only ever work for the simplest home networking scenarios and, if they do work, the resulting security threat situation is quite scary. The reality is that the broadly deployed Internet infrastructure is optimized for the Web: clients make outbound requests, publicly discoverable and reachable servers respond.

If your application requires bi-directional connectivity you effectively have two choices: Either you bet on the available workarounds and live with the consequences (as BitTorrent does) or you build and operate some form of Relay service for your application. A Relay service accepts and maintains connections from firewalled and/or NAT-ed clients and routes messages between them. Practically all chat, instant messaging, video conferencing, VoIP, and multiplayer gaming applications and many other popular Internet applications depend on some form of Relay service.

The challenge with Relay services is that they are incredibly hard to build in a fashion that they can provide Internet scale where they need to route between thousands or even millions of connections as the large Instant Messaging networks do. And once you have a Relay that can support such scale it is incredibly expensive to operate. So expensive in fact that the required investments and the resulting operational costs are entirely out of reach for the vast majority of software companies. The connectivity challenge is a real innovation blocker and represents a significant entry barrier.

The good news is that Microsoft .NET Service Bus provides a range of bidirectional, peer-to-peer connectivity options including relayed communication. You don't have to build your own or run your own; you can use this Building Block instead. The .NET Service Bus covers four logical feature areas: Naming, Registry, Connectivity, and Eventing.

Naming

The Internet's Domain Name System (DNS) is a naming system primarily optimized for assigning names and roles to hosts. The registration records either provide a simple association of names and IP addresses or a more granular association of particular protocol roles (such as identifying a domain's mail server) with an IP address. In either case, the resolution of the DNS model occurs at the IP address level and that is very coarse grained. Since it is IP address centric, a DNS registration requires a public IP address. Systems behind NAT can't play. Even though Dynamic DNS services can provide names to systems that do have a public IP address, relying on DNS means for most ISP customers that the entire business site or home is identified by a single DNS host entry with dozens or hundreds of hosts sitting behind the NAT device.

If you want to uniquely name individual hosts behind NATs, differentiate between individual services on hosts, or want to name services based on host-independent criteria such as the name of a user or tenant, the DNS system isn't an ideal fit.

The .NET Service Bus Naming system is a forest of (theoretically) infinite-depth, federated naming trees. The Naming system maintains an independent naming tree for each tenant's solution scope and it's up to the application how it wants to shape its tree. 'Solution' is a broad term in this context meant to describe a .NET Service Bus tenant – on the customer side, a Service Bus application scope may map to dozens of different on-site applications and hundreds of application instances.

Any path through the naming tree has a projection that directly maps to a URI.

Let's construct an example to illustrate this: You design a logistics system for a trucking company where you need to route information to service instances at particular sites. The application scope is owned by your client, 'ContosoTrucks' which has a number of logistics centers where they want to deploy the application. Your application is called 'Shipping' and the endpoints through which the shipping orders are received at the individual sites are named 'OrderManagement'. The canonical URI projection of the mapping of New York's order management application endpoint instance into the ServiceBus Naming system is
http://servicebus.windows.net/services/contoso/NewYork/Shipping/OrderManagement/

The significant difference from DNS naming is that the identification of services and endpoints moves from the host portion of the URI to the path portion and becomes entirely host-agnostic. The DNS name identifies only the scope and the entry point for accessing the naming tree. That also means that the path portion of such a URI represents a potentially broadly distributed federation of services in the Naming service, while the path portion of a 'normal' URI typically designates a collocated set of resources.

There is no immediate access API for the Naming system itself. Instead, access to the Naming system is provided through the overlaid Service Registry.

Service Registry

The Service Registry allows publishing service endpoint references (URIs or WS-Addressing EPRs) into the Naming system and to discover services that have been registered.

The primary access mechanism for the Service Registry is based on the Atom Publishing Protocol (APP), allowing clients to publish URIs or EPRs by sending a simple HTTP PUT request with an Atom 1.0 entry to any name in the naming tree. An entry is removed by sending an HTTP DELETE request to the same name. There is no need to explicitly manage names; names are automatically created and deleted as you create or delete service registry entries.
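To make that concrete, here is a sketch of what such a registration request might look like on the wire. This is illustrative only: the exact entry content of a Service Registry registration is defined by the SDK, and the name below reuses the ContosoTrucks example from above; only the PUT verb, the Atom entry media type, and the per-name URI reflect the mechanics described here.

```xml
PUT /services/contoso/NewYork/Shipping/OrderManagement/ HTTP/1.1
Host: servicebus.windows.net
Content-Type: application/atom+xml;type=entry

<entry xmlns="http://www.w3.org/2005/Atom">
  <id>uuid:...</id>
  <title type="text">OrderManagement</title>
  <updated>2008-10-01T12:00:00Z</updated>
  <!-- the registered endpoint reference; a WS-Addressing EPR could appear here instead -->
  <link rel="alternate" href="sb://servicebus.windows.net/services/contoso/NewYork/Shipping/OrderManagement/"/>
</entry>
```

A subsequent HTTP DELETE to the same URI removes the entry, and with it the name, again.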

Service discovery is done by navigating the naming hierarchy, which is accessible through a nested tree of Atom 1.0 feeds whose master-feed is located at http://servicebus.windows.net/services/[solution]/. Any publicly registered service is accessible through the feed at the respective location.

In addition to the Atom Publishing Protocol, the Service Registry also supports publishing, accessing, and removing endpoint references using WS-Transfer. The Relay service automatically manages its endpoints in the Service Registry without requiring any additional steps.

The Service Registry is an area that will see quite significant further additions over the next few milestones including support for service categorization, search across the hierarchy, and support for additional high-fidelity discovery protocols.

Connectivity

The core of the connectivity feature area of the .NET Service Bus is a scalable, general-purpose Relay service. The Relay's communication fabric supports unicast and multicast datagram distribution, connection-oriented bi-directional socket communication and request-response messaging.

Towards listening services, the Relay takes on the same role as operating-system-provided listeners such as Windows' HTTP.SYS. Instead of listening for HTTP requests locally, a relayed HTTP service establishes an HTTP listener endpoint inside the cloud-based Relay, and clients send requests to that cloud-based listener, from where they are forwarded to the listening service.

The connection between the listener and the Relay is always initiated from the listener side. In most connection modes (there are some exceptions that we'll get to) the listener initiates a secured outbound TCP socket connection into the Relay, authenticates, and then tells the Relay at which place in the naming tree it wants to start listening and what type of listener should be established.

Since a number of tightly managed networking environments block outbound socket connections and only permit outbound HTTP traffic, the socket-based listeners are complemented by an HTTP-based multiplexing polling mechanism that builds on a cloud-based message buffer. In the PDC release, the HTTP-based listeners support only unicast and multicast datagram communication, but bidirectional connectivity is quite easily achievable by pairing two unicast connections with mutually reversed client and listener roles.

A special variation of the bi-directional socket communication mode is 'Direct Connect'. The 'Direct Connect' NAT traversal technology is capable of negotiating direct end-to-end socket connections between arbitrary endpoints even if both endpoints are located behind NAT devices and Firewalls. Using Direct Connect, you start connections through the Relay, 'Direct Connect' negotiates the most direct possible connectivity route between the two parties, and once that route is established the connection is upgraded to the direct connection without information loss.

With these connectivity options, the Relay can provide public, bi-directional connectivity to almost any service, irrespective of whether the hosting machine is located behind a NAT or whether the Firewalls layered up towards the public network disallow inbound traffic. The automatic mapping into the Naming system means that the service also gains a public address, and the service can, on demand, be automatically published into the Service Registry to make it discoverable.

In addition to providing NAT and Firewall traversal and discoverability, the delegation of the public network endpoint into the Relay provides a service with a number of additional key advantages that are beneficial even if NAT traversal or discoverability are not problems you need to solve:

  • The Relay functions as a "demilitarized zone" that is isolated from the service's environment and takes on all external network traffic, filtering out unwanted traffic.
  • The Relay anonymizes the listener and therefore effectively hides all details about the network location of the listener thus reducing the potential attack surface of the listening service to a minimum.
  • The Relay is integrated with the Access Control Service and can require clients to authenticate and be authorized at the Relay before they can connect through to the listening service. This authorization gate is enabled by default for all connections and can be selectively turned off if the application wants to perform its own authentication and authorization.

These points are important to consider in case you are worried about the fact that the Relay service provides Firewall traversal. Firewalls are a means to prevent undesired foreign access to networked resources – the Relay provides a very similar function but does so on an endpoint-by-endpoint basis and provides an authentication and authorization mechanism on the network path as well.

If your applications are already built on the .NET Framework and your services are built using the Windows Communication Foundation (WCF), it's often just a matter of changing your application's configuration settings to have your services listen on the Relay instead of on the local machine.

The Microsoft.ServiceBus client framework provides a set of WCF bindings that are very closely aligned with the WCF bindings available in the .NET Framework 3.5. If you are using the NetTcpBinding in your application, you switch to the NetTcpRelayBinding; the BasicHttpBinding maps to the BasicHttpRelayBinding; and the WebHttpBinding has its equivalent in the WebHttpRelayBinding. The key difference between the standard WCF bindings and their Relay counterparts is that the latter establish a listener in the cloud instead of listening locally.
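To illustrate, here's a sketch of what that configuration change might look like for a TCP service. The service, contract, and address names are invented for this example, and it assumes the Microsoft.ServiceBus binding extensions are registered in the config as described in the SDK:

```xml
<system.serviceModel>
  <services>
    <service name="Contoso.OrderManagementService">
      <!-- before: local TCP listener
      <endpoint address="net.tcp://localhost:9000/OrderManagement"
                binding="netTcpBinding"
                contract="Contoso.IOrderManagement" /> -->
      <!-- after: the listener is established inside the cloud-based Relay -->
      <endpoint address="sb://servicebus.windows.net/services/contoso/NewYork/Shipping/OrderManagement/"
                binding="netTcpRelayBinding"
                contract="Contoso.IOrderManagement" />
    </service>
  </services>
</system.serviceModel>
```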

All WS-Security and WS-ReliableMessaging scenarios that are supported by the standard bindings are fully supported through the Relay. Transport-level message protection using HTTPS or SSL-protected TCP connections is supported as well.

If the listener chooses to rely on WS-Security to perform its own authentication and authorization instead of using the security gate built into the Relay, the HTTP-based Relay bindings' policy projection is identical to that of their respective standard binding counterparts, which means that client components can readily use the standard .NET Framework 3.5 bindings (and other WS-* stacks such as Sun Microsystems' Metro extensions for the Java JAX-WS framework).

If you prefer RESTful services over SOAP services, you can build them on the WebHttpRelayBinding using the WCF Web programming model introduced in the .NET Framework 3.5. The Relay knows how to route SOAP 1.1 messages, SOAP 1.2 messages, and arbitrary HTTP requests transparently.

The NetEventRelayBinding doesn't have an exact counterpart in the standard bindings. This binding provides access to the multicast publish/subscribe capability in the Relay. Using this binding, clients act as event publishers and listeners act as subscribers. An event topic is represented by an agreed-upon name in the naming tree, and there can be any number of publishers and any number of subscribers using that named rendezvous point in the Relay. Listeners can subscribe independently of whether a publisher currently maintains an open connection, and publishers can publish messages irrespective of how many listeners are currently active, including zero. The result is a very easy-to-use, lightweight, one-way publish/subscribe event distribution mechanism that doesn't require any particular setup or management.
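As a rough sketch of the resulting programming model (the contract, service type, and topic name below are invented for illustration; this assumes a reference to the Microsoft.ServiceBus assembly and credentials configured as the SDK requires), a subscriber and a publisher simply meet at the same name:

```csharp
// one-way contract shared by publishers and subscribers (invented for this sketch)
[ServiceContract]
public interface IEventNotification
{
    [OperationContract(IsOneWay = true)]
    void OrderShipped(string orderId);
}

// subscriber: opens a listener on the agreed-upon name in the naming tree
var host = new ServiceHost(typeof(EventNotificationService));
host.AddServiceEndpoint(typeof(IEventNotification),
    new NetEventRelayBinding(),
    "sb://servicebus.windows.net/services/contoso/events/shipping/");
host.Open();

// publisher: sends one-way messages to the same name; zero or more
// currently connected subscribers will receive each event
var factory = new ChannelFactory<IEventNotification>(
    new NetEventRelayBinding(),
    new EndpointAddress("sb://servicebus.windows.net/services/contoso/events/shipping/"));
IEventNotification channel = factory.CreateChannel();
channel.OrderShipped("NYC-42");
```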

The discussion of the close alignment between the Relay's .NET programming experience and the standard .NET Framework shouldn't imply that the Relay requires the use of the .NET Framework. Microsoft is working with community partners to provide immediate and native Relay support for the Java and Ruby platforms; initial releases will be available at or shortly after PDC, with more language and platform support lined up in the pipeline.

The Relay provides connectivity options that allow you to build bidirectional communication links for peer-to-peer communication, lets you make select endpoints securely and publicly reachable without having to open up the Firewall floodgates, and provides a cloud-based pub/sub event bus that permits your application to distribute events at Internet scale. I could start enumerating scenarios at this point, but it seems like a safe bet that you can already think of some.

Find out more here:
http://www.microsoft.com/azure/default.mspx
http://www.microsoft.com/azure/servicebus.mspx

 

Categories: Talks | WCF

April 3, 2008
@ 06:10 AM

Earlier today I hopefully gave a somewhat reasonable, simple answer to the question "What is a Claim?" Let's try the same with "Token":

In the WS-* security world, "Token" is really just another name the security geniuses decided to use for "handy package for all sorts of security stuff". The most popular type of token is the SAML (just say "samel") token. If the ladies and gentlemen designing and writing security platform infrastructure and frameworks are doing a good job, you might want to know about the existence of such a thing, but otherwise be blissfully ignorant of all the gory details.

Tokens are meant to be a thing that you need to know about in much the same way you need to know about ... ummm ... rebate coupons you can cut out of your local newspaper or all those funny books that you get in the mail. I have really no idea how the accounting works behind the scenes between the manufacturers and the stores, but it really doesn't interest me much, either. What matters to me is that we get $4 off that jumbo pack of diapers, and we go through a lot of those these days with a 9-month-old baby here at home. We cut out the coupon, present it at the store, four bucks saved. Works for me.

A token is the same kind of deal. You go to some (security) service, get a token, and present that token to some other service. The other service takes a good look at the token and figures whether it 'trusts' the token issuer and might then do some further inspection; if all is well you get four bucks off. Or you get to do the thing you want to do at the service. The latter is more likely, but I liked the idea for a moment.

Remember when I mentioned the surprising fact that people lie from time to time when I wrote about claims? Well, that's where tokens come in. The security stuff in a token is there to keep people honest and to make 'assertions' about claims. The security dudes and dudettes will say "Err, that's not the whole story", but for me it's good enough. It's actually pretty common (that'll be their objection) that there are tokens that don't carry any claims and where the security service effectively says "whoever brings this token is a fine person; they are ok to get in". It's like having a really close buddy relationship with the boss of the nightclub when you are having troubles with the monsters guarding the door. I'm getting a bit ahead of myself here, though.

In the post about claims I claimed that "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln". That's a pretty obvious lie. If there was such a thing as a one-click shopping button for companies on some Microsoft Intranet site (there isn't, don't get any ideas) and I were to push it, I surely should not be authorized to execute the transaction. The imaginary "just one click and you own Xigg" button would surely have some sort of authorization mechanism on it.

I don't know what Xigg is assumed to be worth these days, but there would actually be a second authorization gate to check. I might indeed be authorized to do one-click shopping for corporate acquisitions, but even with my made-up $5Bln limit claim, Xigg may just be worth more than I'm claiming I'm authorized to approve. I digress.

How would the one-click-merger-approval service be secured? It would expect some sort of token that absolutely, positively asserts that my claim "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln" is truthful, and the one-click-merger-approval service would have to absolutely trust the security service that is making that assertion. The resulting token that I'm getting from the security service would contain the claim as an attribute of the assertion, and that assertion would be signed and encrypted in mysterious (to me) yet very secure and interoperable ways, so that I can't tamper with it no matter how closely I look at the token while holding it in my hands.

The service receiving the token is the only one able to crack the token (I'll get to that point in a later post) and look at its internals and the asserted attributes. So what if I were indeed authorized to spend a bit of Microsoft's reserves and I were trying to acquire Xigg at the touch of a button and, for some reason I wouldn't understand, the valuation were outside my acquisition limit? That's the service's job. It'd look at my claim, understand that I can't spend more than $5Bln and say "nope!" - and it would likely send email to SteveB under the covers. Trouble.

Bottom line: For a client application, a token is a collection of opaque (and mysterious) security stuff. The token may contain an assertion (saying "yep, that's actually true") about a claim or a set of claims that I am making. I shouldn't have to care about the further details unless I'm writing a service and I'm interested in some deeper inspection of the claims that have been asserted. I will get to that.

Before that, I notice that I talked quite a bit about some sort of "security service" here. Next post...

Categories: Architecture | SOA | CardSpace | WCF | Web Services

April 2, 2008
@ 08:20 PM

If you ask any search engine "What is a Claim?" and you mean the sort of claim used in the WS-* security space, you'll likely find an answer somewhere, but that answer is just as likely buried in a sea of complex terminology that is only really comprehensible if you have already wrapped your head around the details of the WS-* security model. I would have thought that by now there would be a simple and not too technical explanation of the concept that's easy to find on the Web, but I haven't really had success finding one. 

So "What is a Claim?" It's really simple.

A claim is just a simple statement like "I am Clemens Vasters", or "I am over 21 years of age", or "I am a Microsoft employee", or "I work in the Connected Systems Division", or "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln". A claim set is just a bundle of such claims.

When I walk up to a service with some client program and want to do something on the service that requires authorization, the client program sends a claim set along with the request. For the client to know what claims to send along, the service lets it know about its requirements in its policy.

When a request comes in, this imaginary (U.S.) service looks at the request knowing "I'm a service for an online game promoting alcoholic beverages!". It then looks at the claim set, finds the "I am over 21 years of age" claim and thinks "Alright, I think we got that covered".

The service didn't really care who was trying to get at the service. And it shouldn't. To cover the liquor company's legal behind, they only need to know that you are over 21. They don't really need to know (and you probably don't want them to know) who is talking to them. From the client's perspective that's a good thing, because the client is now in a position to refuse giving out (m)any clues about the user's identity and only provide the exact data needed to pass the authorization gate. Mind that the claim isn't the date of birth for that exact reason. The claim just says "over 21".

Providing control over what claims are being sent to a service (I'm lumping websites, SOAP, and REST services all in the same bucket here) is one of the key reasons why Windows CardSpace exists, by the way. The service asks for a set of claims, you get to see what is being asked for, and it's ultimately your personal, interactive decision to provide or refuse to provide that information.

The only problem with relying on simple statements (claims) of that sort is that people lie. When you go to the Jack Daniel's website, you are asked to enter your date of birth before you can proceed. In reality, it's any date you like, and a 10-year-old kid is easily smart enough to figure that out.

All that complex security stuff is mostly there to keep people honest. Next time ...

Categories: Architecture | SOA | CardSpace | WCF | Web Services

A flock of pigs has been doing aerobatics high up over Microsoft Campus in Redmond in the past three weeks. Neither City of Redmond nor Microsoft spokespeople returned calls requesting comments in time for this article. A Microsoft worker who requested anonymity and has seen the pigs flying overhead commented that "they are as good as the Blue Angels at Seafair, just funnier" and "they seem to circle over building 42 a lot, but I wouldn't know why".

In related news ...

We wrapped up the BizTalk Services "R11" CTP this last Thursday and put the latest SDK release up on http://labs.biztalk.net/. As you may or may not know, "BizTalk Services" is the codename for Microsoft's cloud-based Identity and Connectivity services - with a significant set of further services in the pipeline. The R11 release is a major milestone for the data center side of BizTalk Services, but we've also added several new client-facing features, especially on the Identity services. You can now authenticate using a certificate in addition to username and CardSpace authentication, we have enabled support for 3rd party managed CardSpace cards, and there is extended support for claims based authorization.

Now the surprising bit:

Only about an hour before we locked down the SDK on Thursday, we checked a sample into the samples tree that has a rather unusual set of prerequisites for something coming out of Microsoft:

Runtime: Java EE 5 on Sun Glassfish v2 + Sun WSIT/Metro (JAX-WS extensions), Tool: Netbeans 6.0 IDE.

The sample shows how to use the BizTalk Services Identity Security Token Service (STS) to secure the communication between a Java client and a Java service providing federated authentication and claims-based authorization.

The sample, which you can find in ./Samples/OtherPlatforms/StandaloneAccessControl/JavaEE5 once you've installed the SDK, is a pure Java sample not requiring any of our bits on either the service or client side. The interaction with our services happens purely on the wire.

If you are a "Javahead", it might seem odd that we're shipping this sample inside a Windows-only MSI installer and I will agree that that's odd. It's simply a function of timing and the point in time when we knew that we could get it done (some more on that below). For the next BizTalk Services SDK release I expect there to be an additional .jar file for the Java samples.

It's important to note that this isn't just a one-time thing we did because we could. We have done a significant amount of work on the backend protocol implementations to start opening up a very broad set of scenarios on the BizTalk Services Connectivity services for platforms other than .NET. We already have a set of additional Java EE samples lined up for when we enable that functionality on the backend. However, since getting security and identity working is a prerequisite for making all other services work, that's where we started. There'll be more, and there'll be more platform and language choice than Java down the road.

Just to be perfectly clear: Around here we strongly believe that .NET and the Windows Communication Foundation in particular is the most advanced platform to build services, irrespective of whether they are of the WS-* or REST variety. If you care about my personal opinion, I'll say that several months of research into the capabilities of other platforms has only reaffirmed that belief for me and I don't even need to put a Microsoft hat on to say that.

But we recognize and respect that there are a great variety of individual reasons why people might not be using .NET and WCF. The obvious one is "platform". If you run on Linux or Unix and/or if your deployment target is a Java Application Server, then your platform is very likely not .NET. It's something else. If that's your world, we still think that our services are something that's useful for your applications and we want to show you why. And it is absolutely not enough for us to say "here is the wire protocol documentation; go party!". Only Code is Truth.

I'm writing "Only Code is Truth" also because we've found, perhaps not too surprisingly, that there is a significant difference between reading and implementing the WS-* specs and having things actually work. And here I get to the point where a round of public "Thank You" is due:

The Metro team over at Sun Microsystems has made a very significant contribution to making this all work. Before we started making changes to accommodate Java, there would have been very little hope for anyone to get this seemingly simple scenario to work. We had to make quite a few changes even though our service did follow the specs.

While we were adjusting our backend STS accordingly, the Sun Metro team worked on a set of issues that we identified on their end (with fantastic turnaround times) and worked those into their public nightly builds. The Sun team also 'promoted' a nightly build of Metro 1.2 to a semi-permanent download location (the first 1.2 build that got that treatment), because it is the build tested to successfully interop with our SDK release, even though that build is known to have some regressions in some of their other test scenarios. As they work towards wrapping up their 1.2 release and fixing those other bugs, we'll continue to test and talk to help ensure that the interop scenarios keep working.

As a result of this collaboration, Metro 1.2 is going to be a better and more interoperable release for Sun's customers and the greater Java community, and BizTalk Services as well as our future identity products will be better and more interoperable, too. Win-win. Thank you, Sun.

As a goodie, I put some code into the Java sample that might be useful even if you don't care about our services. Since configuring the Java certificate stores for standalone applications can be really painful, I added some simple code that uses a week-old feature of the latest Metro 1.2 bits that allows configuring the truststores/keystores dynamically and pulling the stores from the client's .jar at runtime. The code also has an authorization utility class that shows how to get and evaluate claims on the service side by pulling the SAML token out of the context and pulling the correct attributes from the token.

Have fun.

[By the way, this is not an April Fool's joke, in case you were wondering]

Categories: Architecture | IT Strategy | Technology | CardSpace | ISB | WCF

We're all sinners. Lots of the authentication mechanisms on the Web are not even "best effort", but rather just cleartext transmissions of usernames and passwords that are easily intercepted and not secure at all. We're security sinners by using them and even more so by allowing this. However, the reality is that there's very likely more authentication on the Web done in an insecure fashion and in cleartext than using any other mechanism. So if you are building WCF apps and you decide "that's good enough", what do you do?

WCF is, rightfully, taking a pretty hard stance on these matters. If you try to use any of the more advanced in-message authN and authZ mechanisms, such as the integration with the ASP.NET membership/role provider models, you'll find yourself in security territory, and our security designers took very good care that you can't create a config that results in the cleartext transmission of credentials. For that you'll need certificates, and you'll also find that it requires full trust (even in 3.5) to use that level of robust on-wire security.

dasBlog has (we're sinners, too) a stance on authentication that's about as lax as everyone else's in blog-land. I haven't seen many MetaWeblog API endpoints running over https, as they rather should be.

So what I need for a bare minimum dasBlog install, where the user isn't willing to get an https certificate for their site, is a very simple, consciously insecure, bare-bones authentication and authorization mechanism for WCF services that uses the ASP.NET membership/role model (dasBlog will use that model as we switch to the .NET Framework 3.5 later this year). It also needs to get completely out of the way when the service is configured with any real AuthN/AuthZ mechanism.

So here's a behavior (some C# 3.0 syntax, but easy to fix) that you can add to channel factories (client) and service endpoints (server) that will do just that. If you care about confidentiality of credentials on the wire, don't use it. For this to work, you need to put the behavior on both ends. The behavior will do nothing (as intended) when the binding isn't the BasicHttpBinding with BasicHttpSecurityMode.None. The header will not show up in WSDL.

On the client, you simply add the behavior and otherwise set the credentials as you would usually do for UserName authentication. This makes sure that the client code stays compatible when you upgrade the wire protocol to a more secure (yet still username-based) binding via config.

MyClient remoteService = new MyClient();
remoteService.ChannelFactory.Endpoint.Behaviors.Add(new SimpleAuthenticationBehavior());
remoteService.ClientCredentials.UserName.UserName = "admin";
remoteService.ClientCredentials.UserName.Password = "!adminadmin";

On the server, you just configure your ASP.NET membership and role database. With that in place, you can even use role-based security attributes or any other authorization mechanism you are accustomed to in ASP.NET. Just as on the client, the behavior gets out of the way and gives way to the "real thing" once you turn on security.

using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Security;
using System.Threading;
using System.Web.Security;
using System.Xml.Serialization;

namespace dasBlog.Storage
{
    [DataContract(Namespace = Names.DataContractNamespace)]
    class SimpleAuthenticationHeader
    {
        [DataMember]
        public string UserName;
        [DataMember]
        public string Password;
    }

    public class SimpleAuthenticationBehavior : IEndpointBehavior
    {
        #region IEndpointBehavior Members

        public void AddBindingParameters(ServiceEndpoint endpoint,
                                         BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyClientBehavior(ServiceEndpoint endpoint,
                                        ClientRuntime clientRuntime)
        {
            if (endpoint.Binding is BasicHttpBinding &&
                ((BasicHttpBinding)endpoint.Binding).Security.Mode == BasicHttpSecurityMode.None)
            {
                var credentials = endpoint.Behaviors.Find<ClientCredentials>();
                if (credentials != null && credentials.UserName != null && credentials.UserName.UserName != null)
                {
                    clientRuntime.MessageInspectors.Add(new ClientMessageInspector(credentials.UserName));
                }
            }
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            if (endpoint.Binding is BasicHttpBinding &&
                ((BasicHttpBinding)endpoint.Binding).Security.Mode == BasicHttpSecurityMode.None)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new DispatchMessageInspector());
            }
        }

        public void Validate(ServiceEndpoint endpoint)
        {
        }

        #endregion

        class DispatchMessageInspector : IDispatchMessageInspector
        {
            #region IDispatchMessageInspector Members

            public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
            {
                int headerIndex = request.Headers.FindHeader("simpleAuthenticationHeader", "http://dasblog.info/2007/08/security");
                if (headerIndex >= 0)
                {
                    var header = request.Headers.GetHeader<SimpleAuthenticationHeader>(headerIndex);
                    request.Headers.RemoveAt(headerIndex);
                    if (Membership.ValidateUser(header.UserName, header.Password))
                    {
                        var identity = new FormsIdentity(new FormsAuthenticationTicket(header.UserName, false, 15));
                        Thread.CurrentPrincipal = new RolePrincipal(identity);
                    }
                }
                return null;
            }

            public void BeforeSendReply(ref Message reply, object correlationState)
            {
            }

            #endregion
        }

        class ClientMessageInspector : IClientMessageInspector
        {
            #region IClientMessageInspector Members

            UserNamePasswordClientCredential creds;

            public ClientMessageInspector(UserNamePasswordClientCredential creds)
            {
                this.creds = creds;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            {
            }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(
                    MessageHeader.CreateHeader("simpleAuthenticationHeader", "http://dasblog.info/2007/08/security",
                        new SimpleAuthenticationHeader { UserName = creds.UserName, Password = creds.Password }));
                return null;
            }

            #endregion
        }
    }
}
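For completeness, here's a minimal sketch of the server-side membership/role configuration this behavior relies on. The provider types are the stock ASP.NET SQL providers; the connection string, store, and application names below are placeholders you'd adjust to your own database:

```xml
<connectionStrings>
  <add name="membershipDb"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=aspnetdb;Integrated Security=True" />
</connectionStrings>
<system.web>
  <membership defaultProvider="SqlProvider">
    <providers>
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="membershipDb"
           applicationName="dasBlog" />
    </providers>
  </membership>
  <!-- enables RolePrincipal and role-based security attributes on the service side -->
  <roleManager enabled="true" defaultProvider="SqlRoleProvider">
    <providers>
      <add name="SqlRoleProvider"
           type="System.Web.Security.SqlRoleProvider"
           connectionStringName="membershipDb"
           applicationName="dasBlog" />
    </providers>
  </roleManager>
</system.web>
```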

Categories: Indigo | WCF

August 21, 2007
@ 07:46 AM

UPDATE: The code has been updated. Ignore this post and go here.

I'm writing lots of code lately. I've rejoined the dasBlog community and I'm busy writing a prototype for the .NET Framework 3.5 version of dasBlog (we just released the 2.0 version, see http://www.dasblog.info/).

One of the goals of the prototype, which we'll eventually merge into the main codebase once the .NET Framework 3.5 is available at hosting sites, is to standardize on WCF for all non-HTML endpoints. Since lots of the relevant inter-blog and blogging tool APIs are still based on XML-RPC, that called for an implementation of XML-RPC on WCF. I've just isolated that code and put it up on wcf.netfx3.com.

My XML-RPC implementation is a binding with a special encoder and a set of behaviors. The Service Model programming experience is completely "normal" with no special extension attributes. That means you can also expose the XML-RPC contracts as SOAP endpoints with all the advanced WCF bindings and features if you like.

The binding supports client and service side and is completely config enabled. Here's a snippet from the MetaWeblog contract:

[ServiceContract(Namespace = "http://www.xmlrpc.com/metaWeblogApi")]
public interface IMetaWeblog : Microsoft.ServiceModel.Samples.XmlRpc.Contracts.Blogger.IBlogger
{
   [OperationContract(Action = "metaWeblog.editPost")]
   bool metaweblog_editPost(string postid,
                            string username,
                            string password,
                            Post post,
                            bool publish);

   [OperationContract(Action = "metaWeblog.getCategories")]
   CategoryInfo[] metaweblog_getCategories(string blogid,
                                           string username,
                                           string password);
    ...
}

For your convenience I've included complete Blogger, MetaWeblog, and MovableType API contracts along with the respective data types in the test application. The test app is a small in-memory blog that you can use with the blogging function of Word 2007 as a client or some other blogging client for testing.
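Since the programming experience is "completely normal" WCF, calling one of these XML-RPC endpoints from a client looks like any other ChannelFactory-based call. The sketch below is illustrative only: the binding class name `XmlRpcHttpBinding` and the endpoint address are assumptions, not taken from the sample itself.

```csharp
// Hypothetical client sketch; XmlRpcHttpBinding and the address are assumed names,
// not confirmed from the sample code.
using System;
using System.ServiceModel;

class MetaWeblogClientSketch
{
    static void Main()
    {
        // The XML-RPC sample ships a binding; the class name here is a placeholder.
        var factory = new ChannelFactory<IMetaWeblog>(
            new XmlRpcHttpBinding(),
            new EndpointAddress("http://localhost:8000/blogapi")); // illustrative URL
        IMetaWeblog proxy = factory.CreateChannel();

        // XML-RPC methods are invoked exactly like any other WCF operation.
        CategoryInfo[] categories =
            proxy.metaweblog_getCategories("blog-1", "user", "password");

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```

The point of the sketch is that nothing XML-RPC-specific leaks into the calling code; swapping the binding for a SOAP binding would leave the client logic untouched.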

Of the other interesting XML-RPC APIs, the Pingback API has the following contract:

    [ServiceContract(Namespace = "http://www.hixie.ch/specs/pingback/pingback")]
    public interface IPingback
    {
        [OperationContract(Action = "pingback.ping")]
        string ping(string sourceUri, string targetUri);
    }

and the WeblogUpdates API looks like this:

    [DataContract]
    public struct WeblogUpdatesReply
    {
        [DataMember]
        public bool flerror;
        [DataMember]
        public string message;
    }

    [ServiceContract]
    public interface IWeblogUpdates
    {
        [OperationContract(Action = "weblogUpdates.extendedPing")]
        WeblogUpdatesReply ExtendedPing(string weblogName, string weblogUrl, string checkUrl, string rssUrl);

        [OperationContract(Action = "weblogUpdates.ping")]
        WeblogUpdatesReply Ping(string weblogName, string weblogUrl);
    }

I'm expecting some interop bugs since I've done a clean implementation from the specs, so if you find any please let me know.

The code is subject to the Microsoft samples license, which means that you can put it into your (blogging) apps. Enjoy.

Categories: MSDN | Indigo | WCF | Weblogs

Having an Internet Service Bus up in the cloud is not very entertaining unless there are services in the bus. Therefore, I built one (and already showed some of the code basics) that’s hopefully fun to play with and will soon share the first version with you after some scrubbing and pending a few updates to the ISB that will optimize the authentication process. It’s a 0.1 version and an experiment. The code download should be ready in the next two weeks, including those adjustments. But you can actually play with parts of it today without compiling or installing anything. The info is at the bottom of this post.

To make matters really interesting, this sample not only shows how to plug a service into the cloud and call it from some Console app, but is a combo of two rather unusual hosts for WCF services: A Windows Live Messenger Add-In that acts as the server, and a Windows Vista Sidebar gadget that acts as the client.

Since the Silicon Valley scene is currently all over Twitter and clones of Twitter are apparently popping up somewhere every day, I thought I could easily provide fodder to the proponents of the alleged Microsoft tradition of purely relying on copying others' ideas and clone them as well ;-)  Well, no, maybe not. This is a bit different.

TweetieBot is an example of a simple personal service. If you choose to host it, you own it, you run it, you control it. The data is held nowhere but on your personal machine, and it's using the BizTalk Services ISB to stick its head up into the cloud at a stable endpoint so that it's easily reachable for a circle of friends, bridging the common obstacles of dynamic IPs, firewalls and NAT. No need to use UPnP or open up ports on your router. If you choose to do so, you can encrypt traffic so that neither anyone looking at our ISB nor anyone else can see what's actually going across the wire.

Right now, lots of the Web 2.0 world lives on the assumption that everything needs to live at central places and that community forms around ad-driven hubs. The mainframe folks had a similar stance in the 70s and 80s and then Personal Computers came along. The pendulum is always swinging and I have little doubt that it will swing back to “personal” once more and that the federation of personal services will seriously challenge the hub model once more.

So what does the sample do? As indicated, TweetieBot is a bot that plugs into a Windows Live Messenger using a simple Add-In. Bart De Smet has a brilliant summary for how to build such Add-Ins. When the Add-In is active and someone chats the bot, it answers politely and remembers the chat line, time and sender. The bird has a leaky long term memory, though. It forgets everything past the last 40 lines.

Where it gets interesting is that the Add-In can stick three endpoints into the BizTalk Services ISB:

  • A Request/Response Web service that allows retrieving the list of the last 40 (or fewer) "tweets" and also allows clients to submit tweets programmatically.
  • An RSS service that allows (right now) anyone to peek into the chat log of the last 40 tweets.
  • An Event service that allows subscribers to get real-time notifications whenever a new tweet is recorded.

The accompanying Sidebar Gadget, which is implemented using WPF, is a client for two of these services.

 When you drop the Gadget on the Sidebar, it will prompt for the IM address of the TweetieBot service you’d like to subscribe to. Once you’ve authenticated at the relay using your registered Information Card, the gadget will pull and show the current list of Tweets and subscribe to the Events service for real-time updates. And whenever someone chats the bot, the Sidebar gadget will immediately show the new entry. So even though the Gadget lives on some client machine that’s hidden between several layers of firewalls and behind NAT, it can actually get push-style event notifications through the cloud!

“How do I send events to clients?” must be one of the most frequent questions that I’ve been asked about Web Services in the past several years. Well, this is your answer right here.

While I’m still toying around with the code and the guys on the 1st floor in my building are doing some tweaks on the ISB infrastructure to make multi-endpoint authentication simpler, you can already play with the bot and help me a bit:

Using Windows Live Messenger you can chat (click here) tweetiebot@hotmail.com now. Drop a few lines. If the bot is online (which means that I’m not tinkering with it) it will reply. Then look at this RSS feed [1] and you can see what you and everyone else have been telling the bot recently. Enjoy.

[1] http://connect.biztalk.net/services/tweetiebot/tweetiebot%40hotmail.com/rss

Categories: Technology | BizTalk | ISB | WCF

We love WS-* as much as we love Web-style services. I say "Web-style" knowing full well that the buzzterm is REST. Since REST is an architectural style and not an implementation technology, it makes sense to make a distinction, and claiming complete RESTfulness for a system is actually a pretty high bar to aspire to. So in order to avoid monikers like POX or Lo-REST/Hi-REST, I just call it what this is all about to mere mortals who don't have an advanced degree in HTTP philosophy: services that work like the Web - or Web-style. That's not to say that a Web-style service cannot be fully RESTful. It surely can be. But if all you want to do is GET to serve up data into mashups and manipulate your backend resources in some other way, that's up to you. Anyways....

Tomorrow at 10:00am (Session DEV03, Room Delfino 4101A), our resident Lo-REST/Hi-REST/POX/Web-Style Program Manager Steve Maine and our Architect Don Box will explain to you how to use the new Web-Style "Programmable Web" features that we're adding to the .NET Framework 3.5 to implement the server magic and the service-client magic to power all the user experience goodness you've seen here at MIX.

Navigating the Programmable Web
Speaker(s): Don Box - Microsoft, Steve Maine
Audience(s): Developer
RSS. ATOM. JSON. POX. REST. WS-*. What are all these terms, and how do they impact the daily life of a developer trying to navigate today’s programmable Web? Join us as we explore how to consume and create Web services using a variety of different formats and protocols. Using popular services (Flickr, GData, and Amazon S3) as case studies, we look at what it takes to program against these services using the Microsoft platform today and how that will change in the future.
If you are in Vegas for MIX, come see the session. I just saw the demo, it'll be good.
Categories: Talks | Technology | WCF | Web Services

Christian Weyer shows off the few lines of pretty straightforward WCF code & config he needed to figure out in order to set up a duplex conversation through BizTalk Services.

Categories: Architecture | SOA | BizTalk | WCF | Web Services | XML

Steve has a great analysis of what BizTalk Services means for Corzen and how he views it in the broader industry context.

Categories: Architecture | SOA | IT Strategy | Technology | BizTalk | WCF | Web Services

April 25, 2007
@ 03:28 AM

"ESB" (for "Enterprise Service Bus") is an acronym that has been floating around in the SOA/BPM space for quite a while now. The notion is that you have a set of shared services in an enterprise that act as a shared foundation for discovering, connecting and federating services. That's a good thing and there's not much of a debate about its usefulness, except perhaps whether "ESB" is merely the term used to describe this service fabric or the name of a concrete product. Microsoft has, for instance, directory services, the UDDI registry, and our P2P resolution services that contribute to the discovery portion, we've got BizTalk Server as a scalable business process, integration and federation hub, we've got the Windows Communication Foundation for building service oriented applications and endpoints, we've got the Windows Workflow Foundation for building workflow-driven endpoint applications, and we have the Identity Platform with ILM/MIIS, ADFS, and CardSpace that provides the federated identity backplane.

Today, the division I work in (Connected Systems Division) has announced BizTalk Services, which John Shewchuk explains here and Dennis Pilarinos drills into here.

Two aspects that make the idea of a "service bus" generally very attractive are that the service bus enables identity federation and connectivity federation. This idea gets far more interesting and more broadly applicable when we remove the "Enterprise" constraint from ESB and put "Internet" in its place, thus elevating it to an "Internet Service Bus", or ISB. If we look at the most popular Internet-dependent applications outside of the browser these days, like the many Instant Messaging apps, BitTorrent, Limewire, VoIP, Orb/Slingbox, Skype, Halo, Project Gotham Racing, and others, many of them depend on one or two key services: Identity Federation (or, in the absence of that, a central identity service) and some sort of message relay in order to connect up two or more application instances that each sit behind firewalls - and at the very least some stable, shared rendezvous point or directory to seed P2P connections. The question "how does Messenger work?" has, from a high-level architecture perspective, a simple answer: The Messenger "switchboard" acts as a message relay.

The problem gets really juicy when we look at the reality of what connecting such applications means, and what happens when an ISV (or you!) comes up with the next cool thing on the Internet:

You'll soon find out that you will have to run a whole lot of server infrastructure and the routing of all of that traffic goes through your pipes. If your cool thing involves moving lots of large files around (let's say you'd want to build a photo sharing app like the very unfortunately deceased Microsoft Max) you'd suddenly find yourself running some significant sets of pipes (tubes?) into your basement even though your users are just passing data from one place to the next. That's a killer for lots of good ideas as this represents a significant entry barrier. Interesting stuff can get popular very, very fast these days and sometimes faster than you can say "Venture Capital".

Messenger runs such infrastructure. And the need for such infrastructure was indeed an (not entirely unexpected) important takeaway from the cited Max project. What looked just to be a very polished and cool client app to showcase all the Vista and NETFX 3.0 goodness was just the tip of a significant iceberg of (just as cool) server functionality that was running in a Microsoft data center to make the sharing experience as seamless and easy as it was. Once you want to do cool stuff that goes beyond the request/response browser thing, you easily end up running a data center. And people will quickly think that your application sucks if that data center doesn't "just work". And that translates into several "nines" in terms of availability in my book. And that'll cost you.

As cool as Flickr and YouTube are, I don't think any of them or their brethren are nearly as disruptive in terms of architectural paradigm shift and long-term technology impact as Napster, ICQ and Skype were as they appeared on the scene. YouTube is just a place with interesting content. ICQ changed the world of collaboration. Napster's and Skype's impact changed and is changing entire industries. The Internet is far more and has more potential than just having some shared, mashed-up places where lots of people go to consume, search and upload stuff. "Personal computing" where I'm in control of MY stuff and share between MY places from wherever I happen to be and NOT giving that data to someone else so that they can decorate my stuff with ads has a future. The pendulum will swing back. I want to be able to take a family picture with my digital camera and snap that into a digital picture frame at my dad's house at the push of a button without some "place" being in the middle of that. The picture frame just has to be able to stick its head out to a place where my camera can talk to it so that it can accept that picture and know that it's me who is sending it.

Another personal, and very concrete and real point in case: I am running, and I've written about that before, a custom-built (software/hardware) combo of two machines (one in Germany, one here in the US) that provide me and my family with full Windows Media Center embedded access to live and recorded TV along with electronic program guide data for 45+ German TV channels, Sports Pay-TV included. The work of getting the connectivity right (dynamic DNS, port mappings, firewall holes), dealing with the bandwidth constraints and shielding this against unwanted access were ridiculously complicated. This solution and IP telephony and video conferencing (over Messenger, Skype) are shrinking the distance to home to what's effectively just the inconvenience of the time difference of 9 hours and that we don't see family and friends in person all that often. Otherwise we're completely "plugged in" on what's going on at home and in Germany in general. That's an immediate and huge improvement of the quality of living for us, is enabled by the Internet, and has very little to do with "the Web", let alone "Web 2.0" - except that my Program Guide app for Media Center happens to be an AJAX app today. Using BizTalk Services would throw out a whole lot of complexity that I had to deal with myself, especially on the access control/identity and connectivity and discoverability fronts. Of course, as I've done it the hard way and it's working to a degree that my wife is very happy with it as it stands (which is the customer satisfaction metric that matters here), I'm not making changes for technology's sake until I'm attacking the next revision of this or I'll wait for one of the alternative and improving solutions (Orb is on a good path) to catch up with what I have.

But I digress. Just as much as the services that were just announced (and the ones that are lined up to follow) are a potential enabler for new Napster/ICQ/Skype type consumer space applications from innovative companies who don't have the capacity or expertise to run their own data center, they are also and just as importantly the "Small and Medium Enterprise Service Bus".

If you are an ISV catering shrink-wrapped business solutions to SMEs whose network infrastructure may be as simple as a DSL line (with dynamic IP) that goes into a (wireless) hub and is as locked down as it possibly can be by the local networking company that services them, we can do as much as we want as an industry in trying to make inter-company B2B work and expand it to SMEs; your customers just aren't playing in that game if they can't get over these basic connectivity hurdles.

Your app, which lives behind the firewall shield and NAT and a dynamic IP, doesn't have a stable, public place where it can publish its endpoints, and you have no way to federate identity (and access control) unless you are doing some pretty invasive surgery on their network setup or you end up building and running a bunch of infrastructure on-site or for them. And that's the same problem as the mentioned consumer apps have. Even more so, if you look at the list of "coming soon" services, you'll find that problems like relaying events or coordinating work with workflows are very suitable for many common use-cases in SME business applications once you imagine expanding their scope to inter-company collaboration.

So where's "Megacorp Enterprises" in that play? First of all, Megacorp isn't an island. Every Megacorp depends on lots of SME suppliers and retailers (or their equivalents in the respective lingo of the verticals). Plugging all of them directly into Megacorp's "ESB" often isn't feasible for lots of reasons and increasingly less so if the SME had a second or third (imagine that!) customer and/or supplier. 

Second, Megacorp isn't a uniform big entity. The count of "enterprise applications" running inside of Megacorp is measured in thousands rather than dozens. We're often inclined to think of SAP or Siebel when we think of enterprise applications, but the vast majority are much simpler and more scoped than that. It's not entirely ridiculous to think that some of those applications run (gasp!) under someone's desk or in a cabinet in an extra room of a department. And it's also not entirely ridiculous to think that these applications are so vertical and special that their integration into the "ESB" gets continuously overridden by someone else's higher priorities and yet, the respective business department needs a very practical way to connect with partners now and be "connectable" even though it sits deeply inside the network thicket of Megacorp. While it is likely on every CIO's goal sheet to contain that sort of IT anarchy, it's a reality that needs answers in order to keep the business bringing in the money.

Third, Megacorp needs to work with Gigacorp. To make it interesting, let's assume that Megacorp and Gigacorp don't like each other much and trust each other even less. They even compete. Yet, they've got to work on a standard and hence they need to collaborate. It turns out that this scenario is almost entirely the same as the "Panic! Our departments take IT in their own hands!" scenario described above. At most, Megacorp wants to give Gigacorp a rendezvous and identity federation point on neutral ground. So instead of letting Gigacorp on their ESB, they both hook their apps and their identity infrastructures into the ISB and let the ISB be the mediator in that play.

Bottom line: There are very many solution scenarios, of which I mentioned just a few, where "I" is a much more suitable scope than "E". Sometimes the appropriate scope is just "I", sometimes the appropriate scope is just "E". The key to achieving the agility that SOA strategies commonly promise is the ability to do the "E to I" scale-up whenever you need it in order to enable broader communication. If you need to elevate one service or a set of services from your ESB to Internet scope, you have the option to go and do so as appropriate and integrated with your identity infrastructure. And since this is all strictly WS-* standards based, your "E" might actually be "whatever you happen to run today". BizTalk Services is the "I".

Or, in other words, this is a pretty big deal.

Categories: Architecture | SOA | IT Strategy | Microsoft | MSDN | BizTalk | WCF | Web Services

We just published a great whitepaper written by our WCF/WF Performance PM Saurabh Gupta on the relative performance of WCF compared to ASMX, WSE, Enterprise Services, and Remoting. This is material for your favorites folder. The summary says:

To summarize the results, WCF is 25%—50% faster than ASP.NET Web Services, and approximately 25% faster than .NET Remoting. Comparison with .NET Enterprise Service is load dependant, as in one case WCF is nearly 100% faster but in another scenario it is nearly 25% slower. For WSE 2.0/3.0 implementations, migrating them to WCF will obviously provide the most significant performance gains of almost 4x.

The one case where WCF is slower is in some comparison scenarios with ES. I'd say that even getting within striking distance of ES/COM+/DCOM/RPC performance for a V1 release that's based on Web services technology is quite an astonishing accomplishment. The ES/COM+/DCOM/RPC stack underneath had almost 15 years to get to where it's at. And the 4x should give you a really convincing reason to make the move from WSE to WCF.

Categories: WCF

April 2, 2007
@ 07:46 PM

Before I continue pointing out SDK samples, why not take a look at a great end-to-end .NET Framework 3.0 demo first? It's been out there for a while and hence this isn't really news, but in case you've not seen it (or the latest revision of it) go check out DinnerNow. The demo covers WCF, Workflow, CardSpace and PowerShell. Awesome piece of work from our Evangelism team.

Categories: WCF | Workflow | CardSpace

March 30, 2007
@ 03:44 PM

One of the "niche" features in WCF that deserves a lot more attention than it is getting is our P2P support. The NetPeerTcpBinding looks, from the developer perspective, mostly like any other binding. The main difference between P2P applications and "normal" client/server apps is, of course, that they are serverless. Hence, P2P apps are commonly based on message exchanges where every peer node in a mesh talks to everyone else in a broadcast fashion, and that model favors (but doesn't require) symmetric duplex contracts.*

When I say that it works like mostly any other binding, I really only mean the developer experience. The NetPeerTcpBinding packs so much network intelligence under its hood that it boggles the mind. The P2P technology underneath will figure out the optimal layout for a peer mesh, propagate messages through the mesh in an optimal fashion using members of the mesh as routers as appropriate. You can hook in filters to control the message propagation, you can control the hop counts, there are detection mechanisms for when a party gets split off the mesh and reconnects, and there are various ways to secure your meshes. And you basically get all the stuff for free if you just pick that binding and configure it.
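To make the "looks like any other binding" point concrete, here is a self-contained sketch (not taken from the samples) of a minimal chat peer built on NetPeerTcpBinding with a symmetric duplex contract. The mesh address, contract shape and security settings are illustrative assumptions, and the default resolver requires PNRP to be available on the machine.

```csharp
// Illustrative sketch of a peer chat node; mesh address, contract and
// security settings are assumptions for demonstration purposes.
using System;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IChat))]
public interface IChat
{
    [OperationContract(IsOneWay = true)]
    void Say(string who, string text);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class ChatPeer : IChat
{
    // Called on every node in the mesh whenever anyone broadcasts.
    public void Say(string who, string text)
    {
        Console.WriteLine("{0}: {1}", who, text);
    }

    public static void Main()
    {
        NetPeerTcpBinding binding = new NetPeerTcpBinding();
        binding.Security.Mode = SecurityMode.None; // demo only; secure real meshes

        // Every node opens the same net.p2p address; the peer channel layer
        // works out the mesh layout and routes each Say() call to all nodes.
        ChatPeer peer = new ChatPeer();
        using (DuplexChannelFactory<IChat> factory = new DuplexChannelFactory<IChat>(
            new InstanceContext(peer), binding,
            new EndpointAddress("net.p2p://chatMesh")))
        {
            IChat mesh = factory.CreateChannel();
            mesh.Say("me", "hello, mesh");
            Console.ReadLine(); // stay in the mesh until Enter is pressed
        }
    }
}
```

Note how the same interface serves as both contract and callback contract, which is exactly the symmetric duplex shape described in the footnote at the end of this post.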

The Peer Channel team has a blog, too. Links to samples:

(a) Basic NetPeerTcpBinding samples - Uses the PNRP resolver mode
(b) Scenario samples:
       (i) Chat - Demonstrates Chat using the non-PNRP custom resolver
       (ii) Custom Resolver - Demonstrates how to write your own Custom Resolver service and client.


* A symmetric duplex contract defines itself as the callback contract:
[ServiceContract(CallbackContract = typeof(IChat))]
public interface IChat
{
  ...
}

Categories: WCF

There are a lot of blog entries that I'd write if they weren't already written. Stupid statement. No, really. One of the great qualities of the documentation that we built for WCF and WF and CardSpace is that it's completely legible and understandable :)

Since there's just a lot of stuff in the SDK docs and one easily gets lost in the forest, I'll point out a few of the conceptual docs and/or samples and may add the occasional commentary here or there. For the first one, which I selfishly point out, the only actual commentary is that I wrote that piece ;)

Go read about Message Inspectors and how to implement client- and/or server-side schema-based validation in WCF, complete with the ability to refer to the validation schemas by config. Adventure-seekers might be interested in poking around in that code and replace the schema validation and the schemas with XSLTs and transforms. That would create some interesting followup-challenges for synthesizing the ContractDescription that projects out the correct pre-transformation representation for WSDL, but I guess that'd be part of the fun.
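For orientation, the core of such an inspector looks roughly like the sketch below. This is not the SDK sample's actual code: the schema file name is illustrative, and a real implementation would load schemas from config as the article describes.

```csharp
// Sketch of schema validation in a dispatch message inspector.
// "messages.xsd" is an illustrative schema location, not from the sample.
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Xml;

class ValidatingInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        // Reading the body consumes the message, so work on a buffered copy
        // and hand a fresh message back to the dispatcher.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        request = buffer.CreateMessage();

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ConformanceLevel = ConformanceLevel.Auto;
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "messages.xsd"); // illustrative

        using (XmlReader bodyReader = buffer.CreateMessage().GetReaderAtBodyContents())
        using (XmlReader validator = XmlReader.Create(bodyReader, settings))
        {
            // Draining the validating reader throws on schema violations.
            while (validator.Read()) { }
        }
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```

The buffered-copy dance is the part people most often get wrong: a WCF Message can only be read once, so anything that inspects the body must recreate the message before the dispatcher sees it.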

Categories: WCF

March 29, 2007
@ 08:02 AM

A bad sign for how much I’m coding these days is that I had a HDD crash three weeks ago and only restored Visual Studio into fully working condition with all my tools and stuff today. I’ve decided that that has to change otherwise I’ll get really rusty.

Picking up the thread from “Professor Indigo” Nicholas Allen, I’ve built a little program that illustrates an alternate handling strategy for poisonous messages that WCF throws into the poison queue on Vista and Longhorn Server if you ask it to (ReceiveErrorHandling.Move). The one we’re showing in the docs is implementing a local resolution strategy that’s being fired within the service when the service ends up faulting; that’s the strategy for ReceiveErrorHandling.Fault and works for MSMQ 3.0. The strategy I’m showing here requires our latest OS wave.

When a message arrives at a WCF endpoint through a queue, WCF will – if the queue is transactional – open a transaction and de-queue the message. It will then try to dispatch it to the target service and operation. Assuming the dispatch works, the operation gets invoked and – might – tank. If it does, an exception is raised, thrown back into the WCF stack and the transaction aborts. Happily, WCF grabs the next message from the queue – which happens to be the one that just caused the failure due to the rollback – and the operation – might – tank again.

Now, the reasons why the operation might fail are as numerous as the combinations of program statement combinations that you could put there. Anything could happen. The program is completely broken, the input data causes the app to go to that branch that nobody ever cared to test – or apparently not enough, the backend database is permanently offline, the machine is having an extremely bad hardware day, power fails, you name it.

So what if the application just keeps choking and throwing on that particular message? With either of the aforementioned error handling modes, WCF is going to take the message out of the loop when its patience with the patient is exhausted. With the ReceiveErrorHandling.Fault option, WCF will raise an error event that can be caught and processed with a handler. When you use ReceiveErrorHandling.Move things are a bit more flexible, because the message causing all that trouble now sits in a queue again.
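The retry budget and the error handling mode described above can be set directly on the binding. The sketch below is illustrative, with made-up retry numbers; only the property names are the real NetMsmqBinding API.

```csharp
// Sketch: configuring poison-message handling in code rather than config.
// The retry counts and delay are illustrative values.
using System;
using System.ServiceModel;

class PoisonBindingSketch
{
    static void Main()
    {
        NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        binding.ReceiveRetryCount = 2;                      // immediate retries per cycle
        binding.MaxRetryCycles = 2;                         // retry cycles before giving up
        binding.RetryCycleDelay = TimeSpan.FromSeconds(10); // pause between cycles

        // On Vista/Longhorn Server (MSMQ 4.0), move the exhausted message to
        // the ";poison" sub-queue instead of faulting the service host.
        binding.ReceiveErrorHandling = ReceiveErrorHandling.Move;

        // ... pass the binding to ServiceHost.AddServiceEndpoint as usual.
    }
}
```

Once the message lands in the ";poison" sub-queue, a second endpoint can listen on that sub-queue address, which is exactly what the code later in this post does.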

The headache-causing problem with poison messages is that you really, really need to do something about them. From the sender’s perspective, the message has been delivered and it puts its trust into the receiver to do the right thing. “Here’s that $1,000,000 purchase order! I’m done, go party!”. If the receiving service goes into the bug-induced loop of recurring death, you’ve got two problems: You have a nasty bug that’s probably difficult to repro since it happens under stress, and you’ve got a $1,000,000 purchase order unhappily sitting in a dark hole. Guess what your great-grand-boss’ boss cares more about.

The second, technically slightly more headache-causing problem with poison messages (if that’s possible to imagine) is that they just sit there with all the gold and diamonds that they might represent, but they are effectively just a bunch of (if you’re lucky) XML goo. Telling a system operator to go and check the poison message queues or to surface their contents to him/her and look what’s going on there is probably not a winning strategy.

So what to do? Your high-throughput automated-processing solution that does the regular business behind the queue has left the building for lunch. That much is clear. How do you hook in some alternate processing path that does at least surface the problem to an operator or "information worker" – or even a call center agent pool – in a legible and intelligible fashion so that a human can look at the problem and try finding a fix? In the end, we've got the best processing unit for non-deterministic and unexpected events sitting between our shoulders, one would hope. How about writing a slightly less automated service alternative that's easy to adjust and try to get the issue surfaced to someone or just try multiple things [Did someone just say "Workflow"?] – and hook that straight up to where all the bad stuff lands: the poison queue.

Here’s the code. I just coded that up for illustrative purposes and hence there’s absolutely room for improvement. I’m going to put the project files up on wcf.netfx3.com and will update this post with the link. We’ll start with the boilerplate stuff and the “regular” service:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel.Channels;
using System.ServiceModel;
using System.Runtime.Serialization;
using System.ServiceModel.Description;
using System.Workflow.Runtime;
using ServerErrorHandlingWorkflow;
using ServerData;

namespace Server
{
    [ServiceContract(Namespace = Program.ServiceNamespaceURI)]
    interface IApplicationContract
    {
        [OperationContract(IsOneWay = true)]
        void SubmitData(ApplicationData data);
    }

    [ServiceBehavior(TransactionAutoCompleteOnSessionClose = true,
                     ReleaseServiceInstanceOnTransactionComplete = true)]
    class ApplicationService : IApplicationContract
    {
        [OperationBehavior(TransactionAutoComplete = true, TransactionScopeRequired = true),
         System.Diagnostics.DebuggerStepThrough]
        public void SubmitData(ApplicationData data)
        {
            // Always fails, standing in for whatever makes a message poisonous.
            throw new Exception("The method or operation is not implemented.");
        }
    }

Not much excitement here except that the throw statement will always cause the service to tank. In real life, the path to that particular place where the service consistently finds its way into a trouble-spot is more convoluted and may involve a few thousand lines, but this is a good approximation for what happens when you hit a poison message. Stuff keeps failing.

The next snippet is our alternate service. Instead of boldly trying to do complex processing, it simply punts the message data to a Workflow. That’s assuming that the message isn’t completely messed up to begin with and can indeed be de-serialized. To mitigate that scenario we could also use a one-way universal contract and be even more careful. The key difference between this and the “regular” service is that the alternate service turns off the WCF address filter check. We’ll get back to that. 
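That one-way universal contract isn't part of the sample, but a minimal sketch might look like the following. The contract name is made up for illustration; the idea is that Action="*" matches any incoming action and the untyped Message parameter skips deserialization, so even a message with a broken body can be picked up and inspected:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical catch-all contract: Action="*" accepts any action,
// and the raw Message avoids the data-contract deserializer entirely.
[ServiceContract(Namespace = Program.ServiceNamespaceURI)]
interface IUniversalOneWayContract
{
    [OperationContract(IsOneWay = true, Action = "*")]
    void ProcessMessage(Message message);
}
```

A service implementing this contract can then pull whatever it needs out of the Message headers and body itself, and nothing short of a malformed envelope will keep the message from reaching the operation.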


    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    class ApplicationErrorService : IApplicationContract
    {
        public void SubmitData(ApplicationData data)
        {
            Dictionary<string, object> workflowArgs = new Dictionary<string, object>();
            workflowArgs.Add("ApplicationData", data);
            WorkflowInstance workflowInstance =
                Program.WorkflowRuntime.CreateWorkflow(
                          typeof(ErrorHandlingWorkflow),
                          workflowArgs);
            workflowInstance.Start();
        }
    }

So now we’ve got the fully automated middle-of-the-road default service and our “what do we do next” alternate service. Let’s hook them up.

    class Program
    {
        public const string ServiceNamespaceURI =
            "http://samples.microsoft.com/2007/03/WCF/PoisonHandling/Service";
        public static WorkflowRuntime WorkflowRuntime = new WorkflowRuntime();

        static void Main(string[] args)
        {
            string msmqQueueName = Properties.Settings.Default.QueueName;
            string msmqPoisonQueueName = msmqQueueName + ";poison";
            string netMsmqQueueName =
                "net.msmq://" + msmqQueueName.Replace('\\', '/').Replace("$", "");
            string netMsmqPoisonQueueName = netMsmqQueueName + ";poison";

            if (!System.Messaging.MessageQueue.Exists(msmqQueueName))
            {
                System.Messaging.MessageQueue.Create(msmqQueueName, true);
            }

First – and for this little demo only – we're setting up a local queue and doing a little stringsmithing to turn the MSMQ-format queue name stored in app.config into the net.msmq URI format. Next …
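As an aside, here's what that stringsmithing does for a hypothetical queue name (the helper and the queue name are made up for illustration; the sample reads the real name from app.config):

```csharp
using System;

class QueueNameSketch
{
    // Same transformation as in the sample's Main, factored into a helper:
    // backslashes become slashes, the "$" in "private$" is dropped,
    // and the net.msmq scheme is prepended.
    static string ToNetMsmqUri(string msmqQueueName)
    {
        return "net.msmq://" + msmqQueueName.Replace('\\', '/').Replace("$", "");
    }

    static void Main()
    {
        string netMsmq = ToNetMsmqUri(@"myserver\private$\orders");
        Console.WriteLine(netMsmq);              // net.msmq://myserver/private/orders
        Console.WriteLine(netMsmq + ";poison");  // address of the poison sub-queue
    }
}
```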

            ServiceHost applicationServiceHost = new ServiceHost(typeof(ApplicationService));
            NetMsmqBinding queueBinding = new NetMsmqBinding(NetMsmqSecurityMode.None);
            queueBinding.ReceiveErrorHandling = ReceiveErrorHandling.Move;
            queueBinding.ReceiveRetryCount = 1;
            queueBinding.RetryCycleDelay = TimeSpan.FromSeconds(1);
            applicationServiceHost.AddServiceEndpoint(typeof(IApplicationContract),
                                                      queueBinding,
                                                      netMsmqQueueName);

Now we've bound the "regular" application service to the queue. I'm setting the binding parameters (look them up at your leisure) so that we fail very fast here. By default, the RetryCycleDelay is set to 30 minutes, which means that WCF gives you a reasonable chance to fix temporary issues while messages hang out in the retry queue. Now for the poison handler service:

      
           
            ServiceHost poisonHandlerServiceHost = new ServiceHost(typeof(ApplicationErrorService));
            NetMsmqBinding poisonBinding = new NetMsmqBinding(NetMsmqSecurityMode.None);
            poisonBinding.ReceiveErrorHandling = ReceiveErrorHandling.Drop;
            poisonHandlerServiceHost.AddServiceEndpoint(typeof(IApplicationContract),
                                                        poisonBinding,
                                                        netMsmqPoisonQueueName);

Looks almost the same, hmm? The trick here is that we're pointing this one at the poison queue into which the regular service drops all the stuff it can't deal with. Otherwise it's (almost) just a normal service. The key difference between ApplicationErrorService and its sibling is that the poison-message handler service implementation is decorated with [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]. Since the original message was sent to a different queue (the original one) and we're now reading from a sub-queue that has a different name and therefore a different WS-Addressing To identity, WCF would normally refuse to process that message. With this behavior setting we tell WCF to ignore the mismatch and have the service treat the message as if it had landed in the right place, which is what we want.

And now for the unspectacular run-it and drop-a-message-into-queue finale:

            applicationServiceHost.Open();
            poisonHandlerServiceHost.Open();

            Console.WriteLine("Application running");

            ChannelFactory<IApplicationContract> client =
                new ChannelFactory<IApplicationContract>(queueBinding,
                                                         netMsmqQueueName);
            IApplicationContract channel = client.CreateChannel();
            ApplicationData data = new ApplicationData();
            data.FirstName = "Clemens";
            data.LastName = "Vasters";
            channel.SubmitData(data);
            ((IClientChannel)channel).Close();

            Console.WriteLine("Press ENTER to exit");
            Console.ReadLine();
        }
    }
}

The Workflow that's hooked up to the poison handler in my particular sample project does nothing big. It's got a property that is initialized with the data item and a code activity that spits out the message to the console. It could send an email, page an operator through a messenger, etc., etc. Whatever works.
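As a footnote, the fast-fail binding settings used in code above could equally live in app.config. A sketch, with a made-up binding name (the attribute names mirror the NetMsmqBinding properties set in the code):

```xml
<system.serviceModel>
  <bindings>
    <netMsmqBinding>
      <!-- fail fast: one retry, one-second retry cycle,
           then move the message to the poison sub-queue -->
      <binding name="fastFailQueueBinding"
               receiveRetryCount="1"
               retryCycleDelay="00:00:01"
               receiveErrorHandling="Move">
        <security mode="None" />
      </binding>
    </netMsmqBinding>
  </bindings>
</system.serviceModel>
```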

Categories: MSMQ | WCF

January 18, 2007
@ 04:04 AM

Doug Purdy, the (my) Group Program Manager of the Connected Framework team (owning WCF and WF) just got email:

Dear Douglas, 

Last year you emailed us regarding .NET Framework 3.0. We are emailing you to let you know that we have installed .NET Framework 3.0 on our webservers.


We continue to improve our product so please keep an eye out on our service.


Have a great day.


DiscountASP.NET

- Microsoft Gold Partner

- 2006 and 2005 Product of the Year: asp.netPRO Magazine Readers' Choice

- Best ASP.NET Web Hosting: 2006 and 2005 asp.netPRO Magazine Readers' Choice

- Best .NET Hosting Provider: .NET Developer's Journal 2005 Readers' Choice

How is your ISP doing?

Categories: WCF | Workflow

The request below has been handed to me by the BizTalk team here at Microsoft. If you have programmed in WCF, happen to be at or around the Microsoft Redmond campus at that time and want to help out, send an email until this Friday to uccoord at microsoft.com with the subject line "BizTalk Usability Study" to sign up:


Microsoft is conducting research on BizTalk Server and is seeking developers who have a working knowledge of this product and WCF. If you are a current BizTalk Developer with WCF experience, the team would like to invite you to participate in this research.


Studies are currently being scheduled for Monday Oct 30 through Friday Nov 3, 2006 in Redmond, WA.  Each study will be scheduled at your convenience and will run approximately 2 hours.  This is a unique opportunity to provide feedback on the adapter creation process in BizTalk.  


Your input and participation are extremely valuable and help ensure that your needs are met when interacting with BizTalk and WCF. If scheduled for a usability study, you will receive a retail software product selection for your time and feedback. Some of the items include Office Pro and Visual Studio .NET.


Categories: BizTalk | WCF

September 25, 2006
@ 11:51 PM

I've posted the current WCF Training Providers list on wcf.netfx3.com this weekend. All of these folks are running custom-built training classes for WCF, and until we here at MS come out with the official "Microsoft Official Curriculum" for WCF and the other .NET Framework 3.0 technologies (which will take several months from when Vista ships), these offerings are indeed our preferred option for you to get WCF training.

One event that I'll personally highlight and happily and shamelessly advertise is a cooperation between my ex-firm newtelligence and my friends at IDesign, because it's coming up very soon. One of the coolest aspects of that class is that it is scheduled to take place in Europe's #1 vacation spot, Mallorca, which means that cheap flights should be available from anywhere and the weather is nice, too. Registration is open and my understanding is that it closes this week! I wish I could go.

Categories: Indigo | WCF

September 1, 2006
@ 09:00 PM

The Windows Communication Foundation's RC1 bits are now live. RC means "Release Candidate" and our team is really, really serious about this release being as close to what we intend to ship as we can ever get. Our database view with unresolved code defects is essentially empty (there is not more than a handful of small fixes for very esoteric scenarios that we're still doing for RTM). The time of breaking changes is absolutely and finally over for "WCF Version 1".

The team is very excited about this. There's lots of joy in the hallways. We're getting close to being done. Remember when you saw the first WS-* specs popping up out there some 6 years ago? That's when this thing was started. You can just imagine how pumped the testers, developers and program managers are around here. And even though I am new to the family, I get to celebrate a little too. Greatness.

Get the RC1 for the .NET Framework 3.0 with the WCF bits from here:
http://www.microsoft.com/downloads/details.aspx?FamilyId=19E21845-F5E3-4387-95FF-66788825C1AF&displaylang=en 

There's one little issue with the Visual Studio Tools aligned with that version, so it will take another day or so until those get uploaded.

As always, if you find problems, tell us: http://connect.microsoft.com/wcf

Categories: Indigo | WCF | Web Services

June 21, 2006
@ 08:57 AM
In the ongoing MSDN Architecture Webcast Series with broad coverage of all things WCF (see the "Next Generation: .NET Framework 3.0 and Vista" section for archived and upcoming content), I am on today (8AM PST, 11AM EST, 17:00 CET), live from my kitchen table in Germany, with a remix of my "RSS, REST, POX, Sites-as-Services" talks from MIX06 and TechEd.
Categories: Talks | MIX06 | TechEd US | WCF

June 21, 2006
@ 08:39 AM

Cool. I hadn't even seen this demo until now, even though we've had it for a while. Our technical evangelist Craig McMurtry posted the "Digital Fortress" demo, which is an implementation of the computer systems that play major roles in Dan Brown's novel "Digital Fortress". There are several reasons why I find this demo interesting and pretty amusing.

First of all, it has a "Hollywood-Style UI", which is funny. It's got the huge full-screen login screen with a "sort-of-looks-like-the-NSA" logo, a big count-down clock and a "control screen" (below) with the gratuitous graphics and big buttons one might expect. The other thing that's very interesting is that it is a management tools demo (of all things). The key to bust the evil conspiracy is to trace suspicious network activity across many nodes on the network and the script packaged with the demo shows you how to get that done using the built-in WCF tracing facilities. Download.


Categories: MSDN | Indigo | WCF

June 18, 2006
@ 12:56 PM

[Note to self: Schedule the video taping session early in a bound-to-be-stressful week, not 2 hours before you need to leave for the airport on Friday.]

MSDN TV has a new episode featuring yours truly speaking about WCF bindings (and what they cause in the channel stack).

Categories: MSDN | Indigo | WCF

I was sad when "Indigo" and "Avalon" went away. It'd be great if we had a pool of cool, legal-approved code names for which we own the trademark rights and which we could stick to. Think Delphi or Safari. "Indigo" was cool insofar as it was very handy to refer to the technology set, but was removed far enough from the specifics that it didn't create a sharply defined, product-like island within the larger managed-code landscape or carry legacy connotations like "ADO.NET". Also, my talks these days could be 10 minutes shorter if I could refer to Indigo instead of "Windows Communication Foundation". Likewise, my job title wouldn't have a line wrap on the business card if I ever spelled it out in full.

However, when I learned about the WinFX name going away (several weeks before the public announcement) and the new "Vista Wave" technologies (WPF/WF/WCF/WCS) being rolled up under the .NET Framework brand, I was quite happy. Ever since it became clear in 2004 that the grand plan to put a complete, covers-all-and-everything managed API on top (and on quite a bit of the bottom) of everything Windows would have to wait until significantly after Vista, and that therefore the Win16>Win32>WinFX continuity would not tell the true story, that name made only limited sense to stick to. The .NET Framework is the #1 choice for business applications and a well-established brand. People refer to themselves as being "dotnet" developers. But even though the .NET Framework covers a lot of ground and "Indigo", "Avalon", "InfoCard", and "Workflow" are overwhelmingly (or exclusively) managed-code based, there are still quite a few things in Windows Vista that require using P/Invoke or COM/Interop from managed code, or unmanaged code outright. That's not a problem. Something has to manage the managed code, and there's no urgent need to rewrite entire subsystems in managed code if you only want to add or revise features.

So now all the new stuff is part of the .NET Framework. That is a good, good, good change. This says what it all is.

Admittedly confusing is the "3.0" bit. What we'll ship is a Framework 3.0 that rides on top of the 2.0 CLR and includes the 2.0 versions of the Base Class Library, Windows Forms, and ASP.NET. It doesn't include the formerly-announced-as-to-be-part-of-3.0 technologies like VB9 (there you have the version-number consistency flying out the window outright), C# 3.0, and LINQ. Personally, I think that it might be a tiny bit less confusing if the Framework had a version-number-neutral name such as ".NET Framework 2006", which would allow doing what we do now with less potential for confusion, but only a tiny bit. Certainly not enough to stage a war over "2006" vs. "3.0".

It's a matter of project management reality and also one of platform predictability that the ASP.NET, or Windows Forms teams do not and should not ship a full major-version revision of their bits every year. They shipped Whidbey (2.0) in late 2005 and hence it's healthy for them to have boarded the scheduled-to-arrive-in-2007 boat heading to Orcas. We (the "WinFX" teams) subscribed to the Vista ship docking later this year and we bring great innovation which will be preinstalled on every copy of it. LINQ as well as VB9 and C# incorporating it on a language-level are very obviously Visual Studio bound and hence they are on the Orcas ferry as well. The .NET Framework is a steadily growing development platform that spans technologies from the Developer Division, Connected Systems, Windows Server, Windows Client, SQL Server, and other groups, and my gut feeling is that it will become the norm that it will be extended off-cycle from the Developer Division's Visual Studio and CLR releases. Whenever a big ship docks in the port, may it be Office, SQL, BizTalk, Windows Server, or Windows Client, and as more and more of the still-unmanaged Win32/Win64 surface area gets wrapped, augmented or replaced by managed-code APIs over time and entirely new things are added, there might be bits that fit into and update the Framework.  

So one sane way to think about the .NET Framework version number is that it merely labels the overall package and not the individual assemblies and components included within it. Up to 2.0 everything was pretty synchronized, but given the ever-increasing scale of the thing, it's good to think of that as a lucky (even if intended) coincidence of scheduling. This surely isn't the first time that packages were versioned independently of their components. There was and is no reason for the ASP.NET team to gratuitously recompile their existing bits with a new version number just to have the GAC look pretty and to create the illusion that everything is new - and to break Visual Studio compatibility in the process.

Of course, once we cover 100% of the Win32 surface area, we can rename it all into WinFX again ;-)  (just kidding)

[All the usual "personal opinion" disclaimers apply to this post]

Update: Removed reference to "Win64".

Categories: IT Strategy | Technology | ASP.NET | Avalon | CLR | Indigo | Longhorn | WCF | Windows

I've been quoted as having said so at TechEd and I'll happily repeat it: "XML is the assembly language of Web 2.0", even though some (and likely some more) disagree. James Speer writes "Besides, Assembly Language is hard, XML isn't.", which I have to disagree with.

True, throwing together some angle brackets isn't the hardest thing in the world, but beating things into the right shape is hard and probably even harder than in assembly. Yes, one can totally, after immersing oneself in the intricacies of Schema, write complex types and ponder for days and months about the right use of attributes and elements. It's absolutely within reach for a WSDL zealot to code up messages, portTypes and operations by hand. But please, if you think that's the right way to do things, I also demand that you write and apply your security policy in angle bracket notation from the top of your head and generate WCF config from that using svcutil instead of just throwing a binding together, because XML is so easy. Oh? Too hard? Well, it turns out that except for our developers and testers who are focusing on getting these mappings right, nobody on our product team would probably ever even want to try writing such a beast by hand for any code that sits above the deep-down guts of our stack. This isn't the fault of the specifications (or people here being ignorant), but it's a function of security being hard and the related metadata being complex. Similar things, even though the complexity isn't quite as extreme there, can be said about the other extensions to the policy framework such as WS-RM Policy or those for WS-AT.

As we're getting to the point where the full range of functionality covered by the WS-* specifications is due to hit the mainstream – with us releasing WCF and our valued competitors releasing their respective implementations – hand-crafted contracts will become increasingly meaningless, because it's beyond the capacity of anyone whose job it is to build solutions for their customers to write a complete set of contracts that not only ensures simple data interop but also protocol interop. Just as there were days when all you needed was assembly and INT 21h to write a DOS program (yikes), or knowledge of "C" alongside stdio.h and its fellows to write anything for everything, things are now changing in the same way in Web Services land. Command of XSD and WSDL is no longer sufficient; all the other stuff is just as important to make things work.

Our WCF [DataContract] doesn't support attributes. That's a deliberate choice, because we want to enforce simplicity and enhance the interoperability of schemas. We put an abstraction over XSD and limit the control over it, because we want to simplify the stuff that goes across the wire. We certainly allow everyone to use the XmlSerializer with all of its attribute-based, fine-grained control over schema, even though there are quite a few Schema constructs that even that doesn't support when building schema from such metadata. If you choose to, you can ignore all of our serialization magic, fiddle with the XML Infoset outright, and supply your own schema. However, XML and Schema are specifications that everyone and their dog wanted to get features into, and Schema is hopelessly overengineered. Ever since we all (the industry, not only MS) boarded the SOAP/WS train, we've been debating how to constrain the features of that monster to a reasonable subset that makes sense, and the debate doesn't want to end.

James writes that he "take[s] a lot of care in terms of elements vs. attributes and mak[es] sure the structure of the XML is business-document-like", which only really makes sense if XML documents used in WS scenarios were meant for immediate human consumption, which they're not.

We want to promote a model that is simple and consistent to serialize to and from on any platform, so that things like the differentiation between attributes and elements don't stand in the way of a 1:1 mapping into alternate, non-XML serialization formats such as JSON or what-have-you (most of which don't care about that sort of differentiation). James' statement about "business-document-like" structures is also interesting considering EDIFACT, X.12, or SWIFT, all of which only know records, fields, and values, and don't care about that sort of subtle element/attribute differentiation, either. (Yes, none of those might be "hip" anymore, but they are implemented and power a considerable chunk of the world economy's data exchange.)

By now, XML is the foundation for everything that happens on the web, and I surely don't want it to go away. But we have arrived at the point where matters have gotten so complicated that a layer of abstraction over pretty much all things XML has become a necessity for everyone who makes their money building customer solutions and not by teaching or writing about XML. In my last session at TechEd, I asked a room of about 200 people "Who of you hand-writes XSLT transforms?" 4 hands. "Who of you used to hand-write XSLT transforms?" 40+ hands. I think it's safe to assume that a bunch of the folks who have sworn off masochism and no longer hand-code XSLT are now using tools like the BizTalk Mapper or Altova's MapForce, which means that XSLT is alive and kicking, but only downstairs in the basement. However, the abstractions that these tools provide also allow bypassing XSLT altogether and generating the transformation logic straight into compiled C++, Java, or C# code, which is what MapForce offers. WSDL is already walking down that path.

Categories: TechEd US | Indigo | WCF | Web Services

My first of two sessions this week here at TechEd is on Thursday, at 2:45pm in room 153ABC on "Designing Bindings and Contracts".

I realize that the title sounds a bit abstract; a different way to put it would be "How to choose the correct bindings, and what to consider about contracts, in a variety of architectural scenarios", but that would have been a bit long as a title. In the talk I'll explain the system-defined bindings that we ship in the product so that we've got stuff to work with, and then I'll get out the tablet pen, draw up a bunch of scenarios, and show how our bindings (read: communication options) make sense in those. What's the best choice for N-tier inside and outside of the corporate perimeter, what do you do for queuing-style apps, how do you implement volatile or durable 1:1 pub/sub, how do you implement broadcasts and where do they make sense, etc.

Categories: Architecture | Indigo | WCF

We've just released the "Windows Communication Foundation RSS Toolkit" on our new community site. This toolkit, which comes with complete source code, illustrates how to expose ATOM and RSS feeds through WCF endpoints. I will discuss the toolkit in my session CON339, Room 107ABC, Friday 10:45am here at TechEd.

Categories: TechEd US | Indigo | WCF

Just so that you know: In addition to the regular breakout sessions, we have a number of interactive chalk talks scheduled here at the Connected Systems Technical Learning Center in the Expo Hall. Come by.

Categories: TechEd US | Technology | Indigo | WCF | Workflow

June 12, 2006
@ 12:48 PM

This is my first TechEd! - as a Microsoft employee. It's of course not my first tech event in my new job (Egypt, Jordan, UK, France, Switzerland, Holland, Belgium, Denmark, Las Vegas/USA, Slovenia, and Israel are on the year-to-date list - on top of three long-distance commutes to Redmond), but the big TechEds are always special. It'll be fun. Come by the Connected Systems area in the exhibition hall and find me to chat if you are here in Boston.

Frankly, I didn't expect a Sunday night keynote to be nearly as well attended as it was, but it looks like that experiment mostly worked. The theme of the keynote was Microsoft's 4 Core Promises for IT Pros and Developers, nicely wrapped into a video story based on the TV show "24", with that show's IT superwoman Chloe O'Brian (actress Mary Lynn Rajskub) up on stage with Bob Muglia (our team's VP far up above in my chain of command), who acted as the MC for the show. Finally we got an apology from a Hollywood character for all the IT idiocy they put up on screen. Thanks, Chloe.

Our team has a lot of very cool stuff to talk about at this show. The first highlight is John Justice's WCF Intro talk (Session CON208, Room 157ABC) today at 5:00pm with a "meet the team" panel Q&A session at the end. Block the time.

Categories: Technology | Indigo | WCF

Late last night, my colleague James Conard – who has worked and worked and worked tirelessly on this for the past few months and has shown great patience with a big group of people pulling in all sorts of directions as we got this together – flipped the switch to turn on the new .NET Framework 3.0 community portal family at netfx3.com.

The new Windows Communication Foundation community home is at http://wcf.netfx3.com and it's a great improvement over the small, hastily-thrown-together site that we used to have. There'll be a number of news bits and announcements at the new site throughout and after TechEd, so it might be a good idea to subscribe to the feed now.

My official "Welcome!" post over on the new site is here, the James' site-wide welcome message can be found here.

Categories: Indigo | WCF

Doug Purdy, our Group Program Manager, runs a wodge of home-cooked code every now and then to produce the link list below. I thought that you all out there might find that valuable and therefore I stole a copy of the list for you.

Workflow
Passivation (Dehydration, Unloading) Policy [5/19/2006 4:38:00 PM] -- Advanced Workflow: Enabling Tricky Scenarios
A couple of great new workflow articles [5/29/2006 3:28:00 PM] -- Paul Andrew
WinFX Beta 2 is Released [5/23/2006 6:20:00 PM] -- Paul Andrew
Bill Gates exec email mentions Windows Workflow Foundation [5/17/2006 9:11:00 AM] -- Paul Andrew
Define and execute WF rules on any target object [5/21/2006 11:50:00 AM] -- Moustafa Khalil Ahmed's Space
Services and the Business/IT Gap [5/30/2006 8:30:59 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
WorkflowDesigner hosting and Rules [6/1/2006 11:13:15 PM] -- Jon Flanders' Blog
WorkflowInstance.GetWorkflowDefinition [6/1/2006 10:27:50 PM] -- Jon Flanders' Blog
Absolutely - I am a Quicklearn instructor - this proves it [5/25/2006 10:49:02 AM] -- Jon Flanders' Blog
WF and Serialization Part One [5/23/2006 9:42:25 AM] -- Jon Flanders' Blog
Dave Green on using workflow [5/17/2006 9:40:10 AM] -- Jon Flanders' Blog
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Bracha and Bray on Continuations [5/20/2006 7:11:00 AM] -- Don Box's Spoutlet
WinFX Beta2 has officially shipped [5/23/2006 11:03:05 PM] -- OhmBlog
WF Q & A [5/23/2006 8:18:00 AM] -- Jeffrey Schlimmer's Blog
VSlive 2006 [5/18/2006 2:14:00 PM] -- Welcome to The Metaverse
Biztalk WSE 3.0 Adapter Ships [5/23/2006 9:21:00 PM] -- Mark Fussell's WebLog
TechEd 2006: WCF and WF Chalk Talk Schedule [6/2/2006 9:42:00 PM] -- Kavitak's WebLog
TechEd 2006 - Chalk Talks on Custom Channels [5/20/2006 7:43:00 PM] -- Kavitak's WebLog

Transactions
Passivation (Dehydration, Unloading) Policy [5/19/2006 4:38:00 PM] -- Advanced Workflow: Enabling Tricky Scenarios
WF and Serialization Part One [5/23/2006 9:42:25 AM] -- Jon Flanders' Blog
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Versioning for Addresses, Envelopes, and Messages [5/30/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Creating Custom Bindings [5/25/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
An alternative "WCF to IBM Mainframe CICS" approach [6/1/2006 10:27:00 AM] -- distilled
WinFX Beta 2 is out there [5/23/2006 8:19:00 AM] -- distilled
Rev your transaction engines for WinFX Beta 2 [5/22/2006 9:53:00 PM] -- distilled

Indigo
WinFX Beta 2 is Released [5/23/2006 6:20:00 PM] -- Paul Andrew
So What Is A WCF Configuration Extension Anyways? [5/26/2006 10:44:00 AM] -- Mark Gabarra's Blog
Nothing this week [5/16/2006 12:05:00 PM] -- Mark Gabarra's Blog
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Look, look, my blog is on MSDN [5/26/2006 9:19:11 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Lost questions [5/17/2006 11:23:21 PM] -- Brain.Save()
TS-5540 Summary by an audience [5/30/2006 6:08:51 PM] -- Arun Gupta's Blog
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Versioning for Addresses, Envelopes, and Messages [5/30/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Choosing a Transport [5/24/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Today's Real News: Beta 2 Released [5/23/2006 12:00:00 PM] -- Nicholas Allen's Indigo Blog
Resources for Channel Authors [5/17/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Building a Custom Message Encoder to Record Throughput, Part 4 [5/16/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Impersonation for Hosted Services [5/18/2006 2:19:00 AM] -- Wenlong Dong's Blog
WinFX Beta2 has officially shipped [5/23/2006 11:03:05 PM] -- OhmBlog
UnREST over WS-* and other "enterprisey" things [5/17/2006 8:38:54 AM] -- TheArchitect.co.uk - Jorgen Thelin's weblog
httpcfg Flag Weirdness [5/16/2006 6:18:00 AM] -- Musings from Gudge
VSlive 2006 [5/18/2006 2:14:00 PM] -- Welcome to The Metaverse
WSE 3.0 in June 2006 MSDN Magazine [5/23/2006 10:07:00 PM] -- Mark Fussell's WebLog
Biztalk WSE 3.0 Adapter Ships [5/23/2006 9:21:00 PM] -- Mark Fussell's WebLog
TechEd 2006: WCF and WF Chalk Talk Schedule [6/2/2006 9:42:00 PM] -- Kavitak's WebLog
Beta2 of WinFX Runtime Components v3.0 now available [5/23/2006 1:43:00 PM] -- Kavitak's WebLog
TechEd 2006 - Chalk Talks on Custom Channels [5/20/2006 7:43:00 PM] -- Kavitak's WebLog
An alternative "WCF to IBM Mainframe CICS" approach [6/1/2006 10:27:00 AM] -- distilled
WinFX Beta 2 is out there [5/23/2006 8:19:00 AM] -- distilled
Rev your transaction engines for WinFX Beta 2 [5/22/2006 9:53:00 PM] -- distilled

Standards/Protocols
Define and execute WF rules on any target object [5/21/2006 11:50:00 AM] -- Moustafa Khalil Ahmed's Space
So What Is A WCF Configuration Extension Anyways? [5/26/2006 10:44:00 AM] -- Mark Gabarra's Blog
Microsoft Architect Connections (MSAC) [6/1/2006 7:38:00 AM] -- Service Station, by Aaron Skonnard
Autonomy isn't Autonomy - and a few words about Caching. [6/1/2006 7:18:43 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Services and the Business/IT Gap [5/30/2006 8:30:59 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Look, look, my blog is on MSDN [5/26/2006 9:19:11 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Blogging from Office 12 [5/27/2006 6:40:22 AM] -- Brain.Save()
Hanselminutes Podcast 19 [5/31/2006 12:15:40 AM] -- ComputerZen.com - Scott Hanselman
Hanselminutes Podcast 18 [5/25/2006 9:26:25 PM] -- ComputerZen.com - Scott Hanselman
Subtle Behaviors in the XML Serializer can kill [5/24/2006 11:44:25 PM] -- ComputerZen.com - Scott Hanselman
Articles on Sun/Microsoft interoperability [5/18/2006 1:16:35 AM] -- Arun Gupta's Blog
Introducing wsit.dev.java.net [5/16/2006 5:23:06 PM] -- Arun Gupta's Blog
Ballmer makes Microsoft's case to Wall Street [5/31/2006 6:48:00 AM] -- Todd Bishop's Microsoft Blog @ SeattlePI.com
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Inside the Standard Bindings: BasicHttp [6/1/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Versioning for Addresses, Envelopes, and Messages [5/30/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Choosing a Transport [5/24/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Resources for Channel Authors [5/17/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Building a Custom Message Encoder to Record Throughput, Part 4 [5/16/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Impersonation for Hosted Services [5/18/2006 2:19:00 AM] -- Wenlong Dong's Blog
Developers fail to care about one sided religious war [5/25/2006 4:56:51 PM] -- Marc's space terminal
Don't be that guy (EPR version) [5/22/2006 3:07:32 PM] -- Marc's space terminal
VB9 and Atom [5/17/2006 9:27:00 PM] -- Don Box's Spoutlet
On the C# 3.0 Preview: Some Thoughts on LINQ [5/17/2006 6:35:13 AM] -- Dare Obasanjo aka Carnage4Life
UnREST over WS-* and other "enterprisey" things [5/17/2006 8:38:54 AM] -- TheArchitect.co.uk - Jorgen Thelin's weblog
So you want to learn WSE 3.0? A short primer on how and where to start. [5/25/2006 8:49:00 PM] -- Mark Fussell's WebLog
Biztalk WSE 3.0 Adapter Ships [5/23/2006 9:21:00 PM] -- Mark Fussell's WebLog
Beta2 of WinFX Runtime Components v3.0 now available [5/23/2006 1:43:00 PM] -- Kavitak's WebLog
WS-Policy Working Group [6/2/2006 5:50:09 AM] -- Chris Ferris
Two articles, one good and one bad... [5/19/2006 7:30:00 AM] -- XML Nation

REST
Microsoft Architect Connections (MSAC) [6/1/2006 7:38:00 AM] -- Service Station, by Aaron Skonnard
Developers fail to care about one sided religious war [5/25/2006 4:56:51 PM] -- Marc's space terminal
Windows Live Gadgets SDK Released [5/26/2006 11:09:15 AM] -- Dare Obasanjo aka Carnage4Life
New Version of Windows Live Local Shipped [5/24/2006 10:12:31 AM] -- Dare Obasanjo aka Carnage4Life
My Microsoft [5/18/2006 12:33:29 PM] -- TheArchitect.co.uk - Jorgen Thelin's weblog

POX
Microsoft Architect Connections (MSAC) [6/1/2006 7:38:00 AM] -- Service Station, by Aaron Skonnard
Choosing a Transport [5/24/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog

SOA
Autonomy isn't Autonomy - and a few words about Caching. [6/1/2006 7:18:43 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Services and the Business/IT Gap [5/30/2006 8:30:59 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Putting the User back into SOA - my first ARCast! [5/19/2006 11:36:00 AM] -- simon.says
Two articles, one good and one bad... [5/19/2006 7:30:00 AM] -- XML Nation
Noted [6/1/2006 8:51:06 AM] -- Barry Briggs' Weblog

Web Services
Autonomy isn't Autonomy - and a few words about Caching. [6/1/2006 7:18:43 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Services and the Business/IT Gap [5/30/2006 8:30:59 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Look, look, my blog is on MSDN [5/26/2006 9:19:11 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
TS-5540 Summary by an audience [5/30/2006 6:08:51 PM] -- Arun Gupta's Blog
JavaOne 2006 TS-5540 Slides [5/23/2006 11:31:05 AM] -- Arun Gupta's Blog
JavaOne 2006 - Project Tango Keynote Demo [5/17/2006 1:20:43 AM] -- Arun Gupta's Blog
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Versioning for Addresses, Envelopes, and Messages [5/30/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Developers fail to care about one sided religious war [5/25/2006 4:56:51 PM] -- Marc's space terminal
Windows Live Gadgets SDK Released [5/26/2006 11:09:15 AM] -- Dare Obasanjo aka Carnage4Life
So you want to learn WSE 3.0? A short primer on how and where to start. [5/25/2006 8:49:00 PM] -- Mark Fussell's WebLog
Biztalk WSE 3.0 Adapter Ships [5/23/2006 9:21:00 PM] -- Mark Fussell's WebLog
An alternative "WCF to IBM Mainframe CICS" approach [6/1/2006 10:27:00 AM] -- distilled
[ANN] Tungsten 1.0 - Web services platform [5/24/2006 2:06:44 PM] -- Davanum Srinivas' weblog
Web Services are Dead, Long Live Web Services [5/25/2006 6:43:37 AM] -- mnot’s Web log
WS-Policy Working Group [6/2/2006 5:50:09 AM] -- Chris Ferris
Two articles, one good and one bad... [5/19/2006 7:30:00 AM] -- XML Nation

Remoting
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions

WSE
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Resources for Channel Authors [5/17/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
So you want to learn WSE 3.0? A short primer on how and where to start. [5/25/2006 8:49:00 PM] -- Mark Fussell's WebLog
WSE 3.0 in June 2006 MSDN Magazine [5/23/2006 10:07:00 PM] -- Mark Fussell's WebLog
Biztalk WSE 3.0 Adapter Ships [5/23/2006 9:21:00 PM] -- Mark Fussell's WebLog

COM/MTS/COM+/EnterpriseService
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog

IIS
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions
Choosing a Transport [5/24/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
Building a Custom Message Encoder to Record Throughput, Part 4 [5/16/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
WCF Impersonation for Hosted Services [5/18/2006 2:19:00 AM] -- Wenlong Dong's Blog

MSMQ/System.Messaging
Is .NET Remoting Dead? [5/26/2006 9:19:17 AM] -- Clemens Vasters: Enterprise Development and Alien Abductions

Serialization
Subtle Behaviors in the XML Serializer can kill [5/24/2006 11:44:25 PM] -- ComputerZen.com - Scott Hanselman

Security
Introducing wsit.dev.java.net [5/16/2006 5:23:06 PM] -- Arun Gupta's Blog
TechEd 2006 Chalk Talk Schedule [6/2/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
UnREST over WS-* and other "enterprisey" things [5/17/2006 8:38:54 AM] -- TheArchitect.co.uk - Jorgen Thelin's weblog
VSlive 2006 [5/18/2006 2:14:00 PM] -- Welcome to The Metaverse
TechEd 2006 - Chalk Talks on Custom Channels [5/20/2006 7:43:00 PM] -- Kavitak's WebLog

AJAX
CEO Schmidt on question of Google browser [5/31/2006 11:46:00 AM] -- Todd Bishop's Microsoft Blog @ SeattlePI.com
WCF Webcasts in June [5/31/2006 2:00:00 AM] -- Nicholas Allen's Indigo Blog
[ANN] Tungsten 1.0 - Web services platform [5/24/2006 2:06:44 PM] -- Davanum Srinivas' weblog

System.Net
VB9 and Atom [5/17/2006 9:27:00 PM] -- Don Box's Spoutlet

Categories: WCF | Workflow

My PM colleague Nicholas Allen is certainly on my list for "best blogging newcomer of 2006". He started in February, got hooked, and I'm not sure he has left the keyboard since.

Nicholas just started a blog series that explains the system-defined (formerly known as "standard") bindings that we ship with WCF. He has explained three of them so far, and my guess is that more will follow:

While you are there, make sure to subscribe to Nicholas' feed and also take a look around at his earlier posts. His channel category is a gold mine, and the same can be said of the transports category; everything there is fabulous stuff.

Categories: Indigo | WCF

Christian Weyer stars in a new episode of the German dotnetproTV series and masterfully explains the Windows Communication Foundation. If you don't understand German, you may still enjoy Christian's flip-chart skills and overall good looks. ;-)

Christian Weyer, Microsoft Regional Director and generally recognized and appreciated web-services explainer, is the star of the newest dotnetproTV episode on Windows Communication Foundation. I just watched the episode and ... wow! ... this is one of the best overviews of WCF I have seen so far! And the dialogue with Ralf Westphal is, as always, entertaining and interesting. Hats off!

And since this topic is naturally close to my heart, I'm very glad that dotnetpro is making available not just a teaser for this episode, but Christian's entire show in all its 370MB glory (the link to the video is in the orange box on that page). Download it! Watch it!

Categories: Indigo | WCF