We're all sinners. Lots of the authentication mechanisms on the Web are not even "best effort", but rather just cleartext transmissions of usernames and passwords that are easily intercepted and not secure at all. We're security sinners by using them and even more so by allowing this. However, the reality is that there's very likely more authentication on the Web done insecurely and in cleartext than with any other mechanism. So if you are building WCF apps and you decide "that's good enough", what do you do?

WCF is - rightfully - taking a pretty hard stance on these matters. If you try to use any of the more advanced in-message authN and authZ mechanisms such as the integration with the ASP.NET membership/role provider models, you'll find yourself in security territory, and our security designers took very good care that you can't create a config that results in the cleartext transmission of credentials. For that you'll need certificates, and you'll also find that it requires full trust (even in 3.5) to use that level of robust on-wire security.

dasBlog has (we're sinners, too) a stance on authentication that's about as lax as everyone else's in blog-land. I've seen very few MetaWeblog API endpoints running over https (as they really should).

So what I need for a bare-minimum dasBlog install, where the user isn't willing to get an https certificate for their site, is a very simple, consciously insecure, bare-bones authentication and authorization mechanism for WCF services that uses the ASP.NET membership/role model (dasBlog will use that model as we switch to the .NET Framework 3.5 later this year). It also needs to get completely out of the way when the service is configured with any real AuthN/AuthZ mechanism.

So here's a behavior (some C# 3.0 syntax, but easy to fix) that you can add to channel factories (client) and service endpoints (server) that will do just that. If you care about confidentiality of credentials on the wire, don't use it. For this to work, you need to put the behavior on both ends. The behavior will do nothing (as intended) when the binding isn't the BasicHttpBinding with BasicHttpSecurityMode.None. The header will not show up in WSDL.

On the client, you simply add the behavior and otherwise set the credentials as you would usually do for UserName authentication. This makes sure that the client code stays compatible when you upgrade the wire protocol to a more secure (yet still username-based) binding via config.

MyClient remoteService = new MyClient();
remoteService.ChannelFactory.Endpoint.Behaviors.Add(new SimpleAuthenticationBehavior());
remoteService.ClientCredentials.UserName.UserName = "admin";
remoteService.ClientCredentials.UserName.Password = "!adminadmin";

On the server, you just configure your ASP.NET membership and role database. With that in place, you can even use role-based security attributes or any other authorization mechanism you are accustomed to in ASP.NET. Just as on the client, the behavior steps aside and gives way to the "real thing" once you turn on security.
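To illustrate the role-based attributes mentioned above, here's a minimal sketch. The service class and the "Admin" role name are hypothetical, not part of dasBlog; the point is that once Thread.CurrentPrincipal carries a RolePrincipal, the standard .NET role demand works as usual:

```csharp
using System.Security.Permissions;

// Hypothetical service class; "Admin" is a placeholder role name.
public class BlogService
{
    // Throws SecurityException unless Thread.CurrentPrincipal is in "Admin".
    [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
    public void DeletePost(string postId)
    {
        // only reached when the caller was validated into the "Admin" role
    }
}
```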

using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Security;
using System.Threading;
using System.Web.Security;

namespace dasBlog.Storage
{
    [DataContract(Namespace = Names.DataContractNamespace)]
    class SimpleAuthenticationHeader
    {
        [DataMember]
        public string UserName;
        [DataMember]
        public string Password;
    }

    public class SimpleAuthenticationBehavior : IEndpointBehavior
    {
        #region IEndpointBehavior Members

        public void AddBindingParameters(ServiceEndpoint endpoint,
                                         BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyClientBehavior(ServiceEndpoint endpoint,
                                        ClientRuntime clientRuntime)
        {
            // Only engage on an unsecured BasicHttpBinding; otherwise stay out of the way.
            if (endpoint.Binding is BasicHttpBinding &&
                ((BasicHttpBinding)endpoint.Binding).Security.Mode == BasicHttpSecurityMode.None)
            {
                var credentials = endpoint.Behaviors.Find<ClientCredentials>();
                if (credentials != null && credentials.UserName != null && credentials.UserName.UserName != null)
                {
                    clientRuntime.MessageInspectors.Add(new ClientMessageInspector(credentials.UserName));
                }
            }
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            if (endpoint.Binding is BasicHttpBinding &&
                ((BasicHttpBinding)endpoint.Binding).Security.Mode == BasicHttpSecurityMode.None)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new DispatchMessageInspector());
            }
        }

        public void Validate(ServiceEndpoint endpoint)
        {
        }

        #endregion

        class DispatchMessageInspector : IDispatchMessageInspector
        {
            #region IDispatchMessageInspector Members

            public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
            {
                int headerIndex = request.Headers.FindHeader("simpleAuthenticationHeader", "http://dasblog.info/2007/08/security");
                if (headerIndex >= 0)
                {
                    var header = request.Headers.GetHeader<SimpleAuthenticationHeader>(headerIndex);
                    request.Headers.RemoveAt(headerIndex);
                    if (Membership.ValidateUser(header.UserName, header.Password))
                    {
                        var identity = new FormsIdentity(new FormsAuthenticationTicket(header.UserName, false, 15));
                        Thread.CurrentPrincipal = new RolePrincipal(identity);
                    }
                }
                return null;
            }

            public void BeforeSendReply(ref Message reply, object correlationState)
            {
            }

            #endregion
        }

        class ClientMessageInspector : IClientMessageInspector
        {
            #region IClientMessageInspector Members

            UserNamePasswordClientCredential creds;

            public ClientMessageInspector(UserNamePasswordClientCredential creds)
            {
                this.creds = creds;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            {
            }

            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                request.Headers.Add(
                    MessageHeader.CreateHeader("simpleAuthenticationHeader", "http://dasblog.info/2007/08/security",
                        new SimpleAuthenticationHeader { UserName = creds.UserName, Password = creds.Password }));
                return null;
            }

            #endregion
        }
    }
}
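For completeness, here's a sketch of the kind of web.config entries the server side relies on for Membership.ValidateUser and RolePrincipal to work. Provider names and the connection string name are placeholders for illustration, not dasBlog's actual configuration:

```xml
<!-- Hypothetical config sketch: "SqlProvider", "SqlRoleProvider", and
     "BlogDb" are placeholder names, not part of the original sample. -->
<system.web>
  <membership defaultProvider="SqlProvider">
    <providers>
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="BlogDb" />
    </providers>
  </membership>
  <roleManager enabled="true" defaultProvider="SqlRoleProvider">
    <providers>
      <add name="SqlRoleProvider"
           type="System.Web.Security.SqlRoleProvider"
           connectionStringName="BlogDb" />
    </providers>
  </roleManager>
</system.web>
```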

Categories: Indigo | WCF

August 21, 2007
@ 07:46 AM

UPDATE: The code has been updated. Ignore this post and go here.

I'm writing lots of code lately. I've rejoined the dasBlog community and I'm busy writing a prototype for the .NET Framework 3.5 version of dasBlog (we just released the 2.0 version, see http://www.dasblog.info/).

One of the goals of the prototype, which we'll eventually merge into the main codebase once the .NET Framework 3.5 is available at hosting sites, is to standardize on WCF for all non-HTML endpoints. Since lots of the relevant inter-blog and blogging-tool APIs are still based on XML-RPC, that called for an implementation of XML-RPC on WCF. I've just isolated that code and put it up on wcf.netfx3.com.

My XML-RPC implementation is a binding with a special encoder and a set of behaviors. The Service Model programming experience is completely "normal" with no special extension attributes. That means you can also expose the XML-RPC contracts as SOAP endpoints with all the advanced WCF bindings and features if you like.

The binding supports client and service side and is completely config enabled. Here's a snippet from the MetaWeblog contract:

[ServiceContract(Namespace = "http://www.xmlrpc.com/metaWeblogApi")]
public interface IMetaWeblog : Microsoft.ServiceModel.Samples.XmlRpc.Contracts.Blogger.IBlogger
{
   [OperationContract(Action="metaWeblog.editPost")]
   bool metaweblog_editPost(string postid,
                            string username,
                            string password,
                            Post post,
                            bool publish);

   [OperationContract(Action="metaWeblog.getCategories")]
   CategoryInfo[] metaweblog_getCategories(string blogid,
                                           string username,
                                           string password);
    ...
}
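Because the programming model is plain Service Model, calling such an endpoint is ordinary ChannelFactory code. A hedged sketch: the binding type name XmlRpcHttpBinding and the endpoint address below are my placeholders, not necessarily what the sample actually ships, so check the downloaded code for the real names:

```csharp
using System.ServiceModel;

// Sketch only: "XmlRpcHttpBinding" stands in for whatever binding class
// the XML-RPC sample exposes; the address is a placeholder, too.
var factory = new ChannelFactory<IMetaWeblog>(
    new XmlRpcHttpBinding(),
    new EndpointAddress("http://example.org/blog/metaweblog"));
IMetaWeblog proxy = factory.CreateChannel();
CategoryInfo[] categories = proxy.metaweblog_getCategories("1", "user", "password");
```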

For your convenience I've included complete Blogger, MetaWeblog, and MovableType API contracts along with the respective data types in the test application. The test app is a small in-memory blog that you can use with the blogging function of Word 2007 as a client or some other blogging client for testing.

Of the other interesting XML-RPC APIs, the Pingback API has the following contract:

    [ServiceContract(Namespace="http://www.hixie.ch/specs/pingback/pingback")]
    public interface IPingback
    {
        [OperationContract(Action="pingback.ping")]
        string ping(string sourceUri, string targetUri);
    }

and the WeblogUpdates API looks like this:

    [DataContract]
    public struct WeblogUpdatesReply
    {
        [DataMember]
        public bool flerror;
        [DataMember]
        public string message;
    }

    [ServiceContract]
    public interface IWeblogUpdates
    {
        [OperationContract(Action = "weblogUpdates.extendedPing")]
        WeblogUpdatesReply ExtendedPing(string weblogName, string weblogUrl, string checkUrl, string rssUrl);
        [OperationContract(Action="weblogUpdates.ping")]
        WeblogUpdatesReply Ping(string weblogName, string weblogUrl);
    }

I'm expecting some interop bugs since I've done a clean implementation from the specs, so if you find any please let me know.

The code is subject to the Microsoft samples license, which means that you can put it into your (blogging) apps. Enjoy.

Categories: MSDN | Indigo | WCF | Weblogs

September 25, 2006
@ 11:51 PM

I've posted the current WCF Training Providers list on wcf.netfx3.com this weekend. All of these folks are running custom-built training classes for WCF, and until we here at MS come out with the official "Microsoft Official Curriculum" for WCF and the other .NET Framework 3.0 technologies (which will take several months from when Vista ships), these offerings are indeed our preferred option for you to get WCF training.

One event that I'll personally highlight and happily and shamelessly advertise is a cooperation between my ex-firm newtelligence and my friends at IDesign, because it's coming up very soon. One of the coolest aspects of that class is that it is scheduled to take place in Europe's #1 vacation spot, Mallorca, which means that cheap flights should be available from anywhere, and the weather is nice, too. Registration is open and my understanding is that it closes this week! I wish I could go.

Categories: Indigo | WCF

September 1, 2006
@ 09:00 PM

The Windows Communication Foundation's RC1 bits are now live. RC means "Release Candidate", and our team is really, really serious about this release being as close to what we intend to ship as we can ever get. Our database view with unresolved code defects is essentially empty (there is not more than a handful of small fixes for very esoteric scenarios that we're still doing for RTM). The time of breaking changes is absolutely and finally over for "WCF Version 1".

The team is very excited about this. There's lots of joy in the hallways. We're getting close to being done. Remember when you saw the first WS-* specs popping up out there some 6 years ago? That's when this thing was started. You can just imagine how pumped the testers, developers and program managers are around here. And even though I am new to the family, I get to celebrate a little too. Greatness.

Get the RC1 for the .NET Framework 3.0 with the WCF bits from here:
http://www.microsoft.com/downloads/details.aspx?FamilyId=19E21845-F5E3-4387-95FF-66788825C1AF&displaylang=en 

There's one little issue with the Visual Studio Tools aligned with that version, so it will take another day or so until those get uploaded.

As always, if you find problems, tell us: http://connect.microsoft.com/wcf

Categories: Indigo | WCF | Web Services

June 21, 2006
@ 08:39 AM

Cool. I hadn't even seen this demo until now, even though we've had it for a while. Our technical evangelist Craig McMurtry posted the "Digital Fortress" demo, which is an implementation of the computer systems that play major roles in Dan Brown's novel "Digital Fortress". There are several reasons why I find this demo interesting and pretty amusing.

First of all, it has a "Hollywood-Style UI", which is funny. It's got the huge full-screen login screen with a "sort-of-looks-like-the-NSA" logo, a big count-down clock and a "control screen" (below) with the gratuitous graphics and big buttons one might expect. The other thing that's very interesting is that it is a management tools demo (of all things). The key to bust the evil conspiracy is to trace suspicious network activity across many nodes on the network and the script packaged with the demo shows you how to get that done using the built-in WCF tracing facilities. Download.

 

Categories: MSDN | Indigo | WCF

June 18, 2006
@ 12:56 PM

[Note to self: Schedule the video taping session early in a bound-to-be-stressful week, not 2 hours before you need to leave for the airport on Friday.]

MSDN TV has a new episode featuring yours truly speaking about WCF bindings (and what they cause in the channel stack).

Categories: MSDN | Indigo | WCF

I was sad when "Indigo" and "Avalon" went away. It'd be great if we had a pool of cool, legal-approved code names for which we own the trademark rights and which we could stick to. Think Delphi or Safari. "Indigo" was cool insofar as it was very handy for referring to the technology set, but removed far enough from the specifics that it didn't create a sharply defined, product-like island within the larger managed-code landscape or carry legacy connotations like "ADO.NET". Also, my talks these days could be 10 minutes shorter if I could refer to Indigo instead of "Windows Communication Foundation". Likewise, my job title wouldn't have to have a line wrap on the business card if I ever spelled it out in full.

However, when I learned about the WinFX name going away (several weeks before the public announcement) and the new "Vista Wave" technologies (WPF/WF/WCF/WCS) being rolled up under the .NET Framework brand, I was quite happy. Ever since it became clear in 2004 that the grand plan to put a complete, covers-all-and-everything managed API on top (and on quite a bit of the bottom) of everything Windows would have to wait until significantly after Vista, and that therefore the Win16>Win32>WinFX continuity would not tell the true story, that name made only limited sense to stick to. The .NET Framework is the #1 choice for business applications and a well-established brand. People refer to themselves as being "dotnet" developers. But even though the .NET Framework covers a lot of ground and "Indigo", "Avalon", "InfoCard", and "Workflow" are overwhelmingly (or exclusively) managed-code based, there are still quite a few things in Windows Vista that require using P/Invoke or COM interop from managed code, or unmanaged code outright. That's not a problem. Something has to manage the managed code, and there's no urgent need to rewrite entire subsystems in managed code if you only want to add or revise features.

So all the new stuff is now part of the .NET Framework. That is a good, good, good change. This says what it all is.

Admittedly confusing is the "3.0" bit. What we'll ship is a Framework 3.0 that rides on top of the 2.0 CLR and includes the 2.0 versions of the Base-Class Library, Windows Forms, and ASP.NET. It doesn't include the formerly-announced-as-to-be-part-of-3.0 technologies like VB9 (there you have the version number consistency flying out the window outright), C# 3.0, and LINQ. Personally, I think that it might be a tiny bit less confusing if the Framework had a version-number neutral name such as ".NET Framework 2006" which would allow doing what we do now with less potential for confusion, but only a tiny bit. Certainly not enough to stage a war over "2006" vs. "3.0".

It's a matter of project-management reality, and also of platform predictability, that the ASP.NET or Windows Forms teams do not and should not ship a full major-version revision of their bits every year. They shipped Whidbey (2.0) in late 2005, and hence it's healthy for them to have boarded the scheduled-to-arrive-in-2007 boat heading to Orcas. We (the "WinFX" teams) subscribed to the Vista ship docking later this year, and we bring great innovation which will be preinstalled on every copy of it. LINQ, as well as VB9 and C# incorporating it on a language level, are very obviously Visual Studio bound, and hence they are on the Orcas ferry as well. The .NET Framework is a steadily growing development platform that spans technologies from the Developer Division, Connected Systems, Windows Server, Windows Client, SQL Server, and other groups, and my gut feeling is that it will become the norm for it to be extended off-cycle from the Developer Division's Visual Studio and CLR releases. Whenever a big ship docks in the port, be it Office, SQL, BizTalk, Windows Server, or Windows Client, and as more and more of the still-unmanaged Win32/Win64 surface area gets wrapped, augmented, or replaced by managed-code APIs over time and entirely new things are added, there might be bits that fit into and update the Framework.

So one sane way to think about the .NET Framework version number is that it merely labels the overall package and not the individual assemblies and components included within it. Up to 2.0 everything was pretty synchronized, but given the ever-increasing scale of the thing, it's good to think of that as a lucky (even if intended) coincidence of scheduling. This surely isn't the first time that packages were versioned independently of their components. There was and is no reason for the ASP.NET team to gratuitously recompile their existing bits with a new version number just to have the GAC look pretty and to create the illusion that everything is new - and to break Visual Studio compatibility in the process.

Of course, once we cover 100% of the Win32 surface area, we can rename it all into WinFX again ;-)  (just kidding)

[All the usual "personal opinion" disclaimers apply to this post]

Update: Removed reference to "Win64".

Categories: IT Strategy | Technology | ASP.NET | Avalon | CLR | Indigo | Longhorn | WCF | Windows

I've been quoted as saying so at TechEd and I'll happily repeat it: "XML is the assembly language of Web 2.0", even though some (and likely some more) disagree. James Speer writes "Besides, Assembly Language is hard, XML isn't", which I have to disagree with.

True, throwing together some angle brackets isn't the hardest thing in the world, but beating things into the right shape is hard and probably even harder than in assembly. Yes, one can totally, after immersing oneself in the intricacies of Schema, write complex types and ponder for days and months about the right use of attributes and elements. It's absolutely within reach for a WSDL zealot to code up messages, portTypes and operations by hand. But please, if you think that's the right way to do things, I also demand that you write and apply your security policy in angle bracket notation from the top of your head and generate WCF config from that using svcutil instead of just throwing a binding together, because XML is so easy. Oh? Too hard? Well, it turns out that except for our developers and testers who are focusing on getting these mappings right, nobody on our product team would probably ever even want to try writing such a beast by hand for any code that sits above the deep-down guts of our stack. This isn't the fault of the specifications (or people here being ignorant), but it's a function of security being hard and the related metadata being complex. Similar things, even though the complexity isn't quite as extreme there, can be said about the other extensions to the policy framework such as WS-RM Policy or those for WS-AT.

As we're getting to the point where the full range of functionality covered by the WS-* specifications is due to hit the mainstream, with us releasing WCF and our valued competitors releasing their respective implementations, hand-crafted contracts will become increasingly meaningless, because it's beyond the capacity of anyone whose job it is to build solutions for their customers to write a complete set of contracts that ensures not only simple data interop but also protocol interop. Just as there were days when all you needed was assembly and INT21h to write a DOS program (yikes), or knowledge of "C" alongside stdio.h and friends to write anything for everything, things are changing now in the same way in Web Services land. Command of XSD and WSDL is no longer sufficient; all the other stuff is just as important to make things work.

Our WCF [DataContract] doesn't support attributes. That's a deliberate choice, because we want to enforce simplicity and enhance interoperability of schemas. We put an abstraction over XSD and limit the control over it, because we want to simplify the stuff that goes across the wire. We certainly allow everyone to use the XmlSerializer with all of its attribute-based, fine-grained control over schema, even though there are quite a few Schema constructs that even that doesn't support when building schema from such metadata. If you choose to, you can just ignore all of our serialization magic, fiddle with the XML Infoset outright, and supply your own schema. However, XML and Schema are specifications that everyone and their dog wanted to get features into, and Schema is hopelessly overengineered. Ever since we all (the industry, not only MS) boarded the SOAP/WS train, we've been debating how to constrain the features of that monster to a reasonable subset that makes sense, and the debate doesn't want to end.
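To make the contrast concrete, here's a small sketch (the Customer types are made up for illustration): [DataContract] members always serialize as child elements, while the XmlSerializer lets you opt into XML attributes:

```csharp
using System.Runtime.Serialization;
using System.Xml.Serialization;

// DataContractSerializer: members become child elements, no attribute option.
[DataContract]
public class Customer
{
    [DataMember]
    public string Name;     // serializes as <Name>...</Name>
}

// XmlSerializer: fine-grained control, including XML attributes.
public class CustomerLegacy
{
    [XmlAttribute]
    public string Name;     // serializes as <CustomerLegacy Name="..." />
}
```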

James writes that he "take[s] a lot of care in terms of elements vs. attributes and mak[es] sure the structure of the XML is business-document-like", which only really makes sense if XML documents used in WS scenarios were meant for immediate human consumption, which they're not.

We want to promote a model that is simple and consistent to serialize to and from on any platform, so that things like the differentiation between attributes and elements don't stand in the way of a 1:1 mapping into alternate, non-XML serialization formats such as JSON or what-have-you (most of which don't care about that sort of differentiation). James' statement about "business-document-like" structures is also interesting considering EDIFACT, X.12, or SWIFT, all of which only know records, fields, and values, and don't care about that sort of subtle element/attribute differentiation, either. (Yes, none of those might be "hip" any more, but they are implemented and power a considerable chunk of the world economy's data exchange.)

By now, XML is the foundation for everything that happens on the web, and I surely don't want to have it go away. But we have arrived at the point where matters have gotten so complicated that a layer of abstraction over pretty much all things XML has become a necessity for everyone who makes their money building customer solutions and not by teaching or writing about XML. In my last session at TechEd, I asked a room of about 200 people, "Who of you hand-writes XSLT transforms?" 4 hands. "Who of you used to hand-write XSLT transforms?" 40+ hands. I think it's safe to assume that a bunch of those folks who have sworn off masochism and no longer hand-code XSLT are now using tools like the BizTalk Mapper or Altova's MapForce, which means that XSLT is alive and kicking, but only downstairs in the basement. However, the abstractions that these tools provide also allow bypassing XSLT altogether and generating the transformation logic straight into compiled C++, Java, or C# code, which is what MapForce offers. WSDL is already walking down that path.

Categories: TechEd US | Indigo | WCF | Web Services

My first of two sessions this week here at TechEd is on Thursday, at 2:45pm in room 153ABC on "Designing Bindings and Contracts".

I realize that the title sounds a bit abstract; a different way to put it would be "How to choose the correct bindings and what to consider about contracts in a variety of architectural scenarios", but that would have been a bit long as a title. In the talk I'll explain the system-defined bindings that we ship in the product so that we've got stuff to work with, and then I'll get out the tablet pen and draw up a bunch of scenarios and how our bindings (read: communication options) make sense in those. What's the best choice for N-tier inside and outside of the corporate perimeter, what do you do for queuing-style apps, how do you implement volatile or durable 1:1 pub/sub, how do you implement broadcasts and where do they make sense, etc.

Categories: Architecture | Indigo | WCF

We've just released the "Windows Communication Foundation RSS Toolkit" on our new community site. This toolkit, which comes with complete source code, illustrates how to expose ATOM and RSS feeds through WCF endpoints. I will discuss the toolkit in my session CON339, Room 107ABC, Friday 10:45am here at TechEd.

Categories: TechEd US | Indigo | WCF

Just so that you know: In addition to the regular breakout sessions, we have a number of interactive chalk talks scheduled here at the Connected Systems Technical Learning Center in the Expo Hall. Come by.

Categories: TechEd US | Technology | Indigo | WCF | Workflow

June 12, 2006
@ 12:48 PM

This is my first TechEd! - as a Microsoft employee. It's of course not my first tech event in my new job (Egypt, Jordan, UK, France, Switzerland, Holland, Belgium, Denmark, Las Vegas/USA, Slovenia, and Israel are on the year-to-date list - on top of three long-distance commutes to Redmond), but the big TechEds are always special. It'll be fun. Come by the Connected Systems area in the exhibition hall and find me to chat if you are here in Boston.

Frankly, I didn't expect a Sunday night keynote to be nearly as well attended as it was, but it looks like that experiment mostly worked. The theme of the keynote was Microsoft's 4 Core Promises for IT Pros and Developers, nicely wrapped into a video story based on the TV show "24" and with that show's IT superwoman Chloe O'Brian (actress Mary Lynn Rajskub) up on stage with Bob Muglia (our team's VP far up above in my chain of command), who acted as the MC for the show. Finally we got an apology from a Hollywood character for all the IT idiocy they put up on screen. Thanks, Chloe.

Our team has a lot of very cool stuff to talk about at this show. The first highlight is John Justice's WCF Intro talk (Session CON208, Room 157ABC) today at 5:00pm with a "meet the team" panel Q&A session at the end. Block the time.

Categories: Technology | Indigo | WCF

Late last night, my colleague James Conard, who has worked and worked and worked tirelessly on this for the past few months and has shown great patience with a big group of people pulling in all sorts of directions as we got this together, flipped the switch to turn on the new .NET Framework 3.0 community portal family at netfx3.com.

The new Windows Communication Foundation community home is at http://wcf.netfx3.com and it's a great improvement over the small, hastily-thrown-together site that we used to have. There'll be a number of news bits and announcements throughout and after TechEd at the new site, so it might be a good idea to subscribe to the feed now.

My official "Welcome!" post over on the new site is here, and James' site-wide welcome message can be found here.

Categories: Indigo | WCF

My PM colleague Nicholas Allen is certainly on my list for "best blogging newcomer of 2006". He started in February, got hooked, and I am not sure whether he has actually left the keyboard since then.

Nicholas just started a blog series that explains the system-defined (formerly known as: standard) bindings that we ship with WCF. He's got three of them explained now and my guess is that there are more to follow:

While you are there, make sure to subscribe to Nicholas' feed, and also take a look around at his earlier posts. His channel category is a gold mine, and the same can be said of the transports category and... everything there is fabulous stuff.

Categories: Indigo | WCF

Christian Weyer stars in a new episode of the German dotnetproTV series and masterfully explains the Windows Communication Foundation. If you don't understand German, you may still enjoy Christian's flip-chart skills and overall good looks. ;-)

Christian Weyer - Microsoft Regional Director and generally recognized and valued web services explainer - is the star of the newest dotnetproTV episode on the Windows Communication Foundation. I just watched the episode and... wow!... this is one of the best overviews of WCF I've seen so far! And the dialogue with Ralf Westphal is, as always, entertaining and interesting. Hats off!

And because the topic is naturally close to my heart, I'm very glad that dotnetpro is providing not just a teaser for this episode, but Christian's whole show in all its 370MB glory (the link to the video is in the orange box on that page). Download it! Watch it!

Categories: Indigo | WCF

(via http://windowscommunication.net)

The WCF Documentation Team has started to release biweekly (!) documentation updates. The updates are made available as a set of .CHM files.

Note that these files do not integrate directly into Visual Studio the way the WinFX Windows SDK files do. Since VS integration requires quite a bit of setup work, the VS-integrated help files can only ship with the regular WinFX Windows SDK CTPs. Nevertheless, the feedback from all the customers we asked told us loud and clear that we should ship the documentation in this form irrespective of this minor usability inconvenience, and therefore we do.

If you have feedback on the documentation, please use the "Send comments about this topic to Microsoft" email links below each documentation entry to provide feedback. Due to the volume that our team receives, you might not always get an answer, but your input is most definitely read and considered.

You can download the first (April 15) Documentation CTP directly using this link (20MB).

Categories: Indigo

April 9, 2006
@ 08:57 AM

Matias Woloski writes about ClickOnce and WCF and provides a complete solution path for setting it up and also talks about our "Full Trust" constraint that I explained a few weeks ago.

Categories: Indigo

My grand boss... if someone had told me this a year back... but it turns out that it is a great blessing... anyways... My grand boss, the magnificent Doug Purdy, points to our best-kept secret: you can actually do Remoting-style distributed objects with WCF, as Sowmy and Michael explain.

Update: Tomas Restrepo asks why that is good. Let me clarify: I think the transparent, distributed objects way of doing things is very problematic, but there are some scenarios where they are a feasible solution and there are migration scenarios where you don't have much of a choice. As a platform provider, we have a mainstream path (SO) that we prefer and that's represented in our turnkey scenarios, but we cannot and will not be as dogmatic as to shut the door on different architecture styles. We don't do that on REST/POX on one side and we don't do that on distributed objects on the other side of the spectrum.

Categories: Indigo

April 8, 2006
@ 11:09 AM

The WinFX Tour is coming to Europe!

Mark it in your calendar and, if you can, sign up! Locations: Rotterdam (20 Apr), Nice (25 Apr), Zurich (2 May), Copenhagen (4 May), London (9 May), Eilat/IL (9 May), Reading/UK (10 May), Cairo (15 May), Moscow (19 May)

I'll be speaking at the Zurich, Copenhagen, and Eilat (TechEd Israel) events.

[If the event near you does not have a sign-up page linked, watch your local MSDN portal or MSDN newsletters for updates]

Categories: Talks | Indigo

I am sure that some want to fly under our radar, but I am also sure that a lot of people are very interested in having a big fat green spot show up on our radar screen when it comes to their blog posts. Well, if you look here ... everyone who left a comment on that post is on my blogroll in RSS Bandit, and I am making every interesting and original post/thought/article visible internally to make sure that your wishes/concerns/praise are heard and your contributions to the community are acknowledged.

PS: Did I mention that I am involved in the MVP approval process? ;-)
PPS: Identity (InfoCard, Active Directory, MIIS), Workflow and BizTalk gurus are welcome too. I will get your feed addresses to the right folks.

Categories: Blog | Indigo

April 5, 2006
@ 01:23 PM

Pablo Cibraro (who just received the Connected Systems Developer MVP award; Congratulations!) has built a compression channel for WCF.

Categories: Indigo

April 5, 2006
@ 12:52 PM

Blogland is big. I am currently trying to get a bit of an overview of what people out in blogland are doing with WCF. And while I've been doing that in addition to a bunch of very long and (due to the time difference between Redmond and Germany) very late evening meetings, Sabine has caught the Sudoku virus and keeps filling those grids ...

It turns out, there is convergence between WCF and Sudoku. ;-)

I have seen a few people pointing it out already, but in case you haven't seen Kumar Gaurav Khanna's WS-Sudoku (blog post) game, you might want to take a look. It's ClickOnce installable (given you have the WinFX Feb CTP) and lets a group of people solve a puzzle together. Very nice demo.

Categories: Indigo

April 3, 2006
@ 03:39 PM

Mark, I care deeply about the hobbyist who writes some code on the side, the programmer who works from 9-5 and has a life and just as deeply about those who work 24/7 and about everybody in between ;-)

That said: now that we're getting close to being done with the "this vs. that" debate, we can most certainly figure out the "how can we optimize the programming experience" story. For very many people I've talked to in the past 4 years or so, reducing complexity is an important thing. I firmly believe that we can do enterprise messaging and Web-Style/Lo-REST/POX with a single technology stack that scales up and down in terms of its capabilities.  

Since I take it that you are worried about code bloat at the app level, how would you think about the following client-side one-liners?

  • T data = Pox.Get<T>("myCfg");
  • T data = Pox.Get<T>("myCfg", new Uri("/customer/8929", UriKind.Relative));
  • T data = Pox.Get<T>("myCfg", new Uri("http://example.com/customer/8929"));
  • T data = Pox.Get<T>(new Uri("http://example.com/customer/8929"));
  • U reply = Pox.Put<T,U>(new Uri("http://example.com/customer/8929"), data, ref location);
  • U reply = Pox.Post<T,U>(new Uri("http://example.com/customer/"), data, out location);
  • Pox.Delete(settings, new Uri("http://example.com/customer/8929"));

Whereby "myCfg" refers to a set of config to specify security, proxies, and so forth; settings would refer to an in-memory object with the same reusable info. Our stack lets me code that sort of developer experience in a quite straightforward fashion and I can throw SOAPish WS-Transfer under it and make the call flow on a reliable, routed TCP session with binary encoding without changing the least bit.
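To make that concrete, here's a rough sketch of how a `Pox.Get<T>` along these lines could be built even today on plain HttpWebRequest plus DataContractSerializer. Mind that `Pox` and its signature are just the proposed shape from the list above, not a shipping API, and this body is purely my illustration:

```csharp
// Sketch only: "Pox" and this Get<T> signature are the proposed surface from
// the list above, not a shipping WCF type.
using System;
using System.IO;
using System.Net;
using System.Runtime.Serialization;

public static class Pox
{
    public static T Get<T>(Uri address)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(address);
        request.Method = "GET";
        using (WebResponse response = request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            // deserialize the plain-XML payload into the requested type
            DataContractSerializer serializer = new DataContractSerializer(typeof(T));
            return (T)serializer.ReadObject(body);
        }
    }
}
```

The point is precisely that the same one-line call surface could later be backed by the WCF channel stack instead, without the calling code changing at all.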

If I am still missing your point in terms of ease of use and line count, make a wish, Mark. :-)

Categories: Indigo | Web Services | XML

One of the things I've learned quickly on our team is that the customer is everything. That's not a marketing phrase but the literal truth. Up until January 31st I had great personal powers to cause 11th hour changes in WCF and since the day I joined up that power is gradually diminishing. That's not only because we're edging closer to RTM, but also because it's more difficult to fill in the "business justification" column for design changes that I would want to propose. There is a tension between "more and better features" and "shipping" and of course also a huge difference between a customer legitimately saying "I don't like that behavior" and "umm, so how do we make Rocky happy?".

It turns out that you (yes, you) have two very easy ways to make your suggestions heard and quite directly contribute to our product planning and to file bugs on things that don't work, things that you consider to be behaving in a strange way or stuff you plainly don't like or consider to be missing.

So if you think that we should have a [PatHelland] attribute that constrains the behavior of a WCF service to the exact guidance along the lines of Pat's Fiefdoms and Emissaries or Metropolis models, we'd love to hear about it. (Even though the reply to that exact feature request would probably be an explanation of how to build that on top of the WCF extensibility model - you can actually build that attribute today ;-)

1. The MSDN Forum. The forum is the place where all Program Managers on our team listen. We get a daily report of unanswered questions and we have an internal website where we can manage and assign the questions. So your questions do in fact land in our inboxes.

2. The MSDN Product Feedback Center. You can file bugs straight into our internal product database (called "Product Studio"). That tool is the most powerful way for anyone to submit bugs and feature requests. Whatever goes into the feedback center is an actual, unresolved product bug until it's been on the table and has been given serious consideration. We currently only have a tiny little number of bugs from the product feedback center in the database and we are of the humble opinion that we can't be that good ;-)

Filing bugs and suggesting features is always welcome. You shouldn't be worrying that we have a cutoff point for features at some point before RTM. That's our thing to do. There is always a next version and planning for that has actually started.

Categories: Indigo

The fabulous Ed Pinto has blogged about our breaking changes for the February CTP. Exhaustive list here.

Categories: Indigo

February 22, 2006
@ 05:37 PM

The WinFX Runtime Components February CTP and the SDK and the VS extensions that go with them just hit the download sites. Go get it:

Categories: Avalon | Indigo

I just got a comment from Oran about the lack of durable messaging in WCF and the need for a respective extensibility point. Well... the thing is: Durable messaging is there; use the MSMQ bindings. One of the obvious "problems" with durable messaging that's only based on WS-ReliableMessaging is that that spec (intentionally) does not make any assertions about the behavior of the respective endpoints.

There is no rule saying: "the received message MUST be written to disk". WS-ReliableMessaging is as reliable (and unreliable, in case of very long-lasting network failures or an endpoint outright crashing) and plays the same role as TCP. The mapping is actually pretty straightforward: WS-Addressing = IP, WS-ReliableMessaging = TCP.

So if you do durable messaging on one end and the other end doesn't do it, the sum of the gained reliability doesn't add up to anything more than it was before. MSMQ is fully in control of both ends of the wire and makes assertions about the endpoint behavior and was therefore the logical choice for our durable messaging strategy in V1, because it already ships with Windows and there is (as of yet) no agreed interoperable set of behavioral assertions for WS-RM around how endpoints must deal with received messages except ACKing them.
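For completeness, wiring a service up to the durable MSMQ transport is just a binding choice in config. The queue address, service and contract names below are placeholders, not from a real project:

```xml
<!-- sketch: queue address, service and contract names are placeholders -->
<system.serviceModel>
  <services>
    <service type="MyApp.OrderService">
      <endpoint address="net.msmq://localhost/private/orders"
                binding="netMsmqBinding"
                contract="MyApp.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```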

See Shy's comments.

Categories: Indigo | MSMQ

February 20, 2006
@ 07:00 PM

Just read this on Robert Hurlbut's blog (via Dominick, source is Doug)

As Doug indicates, the issue here is not "we don't want to do it", but that we need to ship. 

The problem is that partial trust is incredibly hard (and very time consuming) to test for a communication platform that is supposed to have rock-solid security (no paradox here) and must perform well. It's just as hard to provide meaningful exceptions (and messages) in case we'd stumble into a CAS exception. You wouldn't want us to just bubble up some arbitrary security exception, but instead will want us to tell you what's causing the problem and how you could fix it. There are (give or take some) 20 base permissions in the framework, most of them allow parameterization, and the system is extensible with custom permissions as well. You can do the math for where that takes you in terms of required combinations and test cases for achieving satisfying test coverage across the whole of Indigo, let alone all the special casing in the actual product code-base.
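To illustrate the failure mode we'd have to special-case everywhere (my example, not product code): a single permission demand deep inside a call stack surfaces as a rather unspecific SecurityException, far away from whatever policy caused the denial.

```csharp
// Illustration (mine, not product code): a FileIOPermission demand succeeds
// in full trust and throws a fairly generic SecurityException in partial
// trust, far away from the CAS policy that caused the denial.
using System;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        FileIOPermission read = new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data");
        try
        {
            read.Demand(); // walks the call stack, checking every caller's grant set
            Console.WriteLine("demand satisfied");
        }
        catch (SecurityException e)
        {
            Console.WriteLine("demand denied: " + e.Message);
        }
    }
}
```

Now multiply that by roughly 20 parameterizable base permissions and every code path in the stack that touches files, sockets, or the registry.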

I wonder how many applications written to support partial trust actually take that complexity into account in their test strategy (hint, hint) ;-)

That said, I will clarify once more that this doesn't mean "we will never do that". It's just not possible to fit this into our V1 schedule in a way that we and you would find the outcome acceptable. 

Categories: Indigo

February 4, 2006
@ 10:37 AM

If you have a blog and you post stuff around WCF/Indigo and you think that I don't have you in my aggregator, please post a comment below with your blog URL. And it totally doesn't matter whether you blog in English, Italian, French, Spanish, Dutch, German, Arabic, Chinese, Russian, or any other language ... I want to know.

Categories: Indigo

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8
If you are in a hurry and all you want is the code, scroll down to the bottom. ;)
And for the few who are still reading: This is the 9th and final part of this series, which turned out to be a little bigger than planned. And if you read all parts up to here, you have meanwhile figured out that the extensions that I have presented here are not only about REST and POX but primarily a demonstration of how customizable Indigo is for your own needs.
Indigo was really a cool code-name. Just like many folks on the WCF team, it's a bit difficult for me to trade it for a clunky moniker like WCF, and it seems that this somewhat reflects public opinion. Much less am I inclined to really spell out “Windows Communication Foundation” in presentations, because that doesn’t exactly roll off the tongue like a poem, does it? But, hey, there’s always the namespace name and that doesn’t suck. “Service Model” is what everyone will be using in code. We don’t need three-letter acronyms, or do we?
But I digress. I’ve explained a complete set of extensions that outfit the service model with the ability to receive and respond to HTTP requests with arbitrary payloads and without SOAP envelopes and do so by dispatching to request handlers (method) by matching the request URI and the HTTP method against metadata that we stick on the methods using attributes.
And now I’ll show you the “Hello World!” for the extensions, and just about the simplest thing I can think of is a web server ;-)  In fact, you already have the complete configuration file for this sample; I showed you that in Part 8.
Since things like “Hello World!” are supposed to be simple and it’s ok not to make that a full coverage test case, we start with the following, very plain contract:

[ServiceContract, HttpMethodOperationSelector]
interface IMyWebServer
{
    [OperationContract, HttpMethod("GET", UriSuffix = "/*")]
    Message Get(Message msg);       
    [OperationContract(Action = "*")]
    Message UnknownMessage(Message msg);
}

By now that should not need much explanation. The Get() method receives all HTTP GET requests on the URI suffix “/*” whereby “*” is a wildcard. In other words, all GET requests go to that method.
The implementation of the service is pretty simple.  I am implementing it as a singleton service that is constructed passing a directory name (path).
The Get() implementation gets the URI of the incoming request from the To header of the message’s Headers collection. (Note that the HTTP transport maps the incoming request’s absolute URI to that header once the encoder has constructed the message.)  

[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
public class MyWebServer : IMyWebServer
{
    string directory;

    public MyWebServer(string directory)
    {
        this.directory = directory;
    }

    public Message Get(Message msg)
    {
        string requestPath = msg.Headers.To.AbsolutePath;
        // get the path
        if (requestPath.Length > 0 && requestPath[0] == '/')
        {
            // if the path is just the "/", append "default.htm" as the
            // default page.
            if (requestPath.Substring(1).Length == 0)
            {
                requestPath = "/default.htm";
            }
            // otherwise check whether a file by the requested name exists
            string filePath = Path.Combine(directory, requestPath.Substring(1).Replace('/', '\\'));
            if (File.Exists(filePath))
            {
                // and return a file message
                return PoxMessages.CreateFileReplyMessage(filePath, PoxMessages.ReplyOptions.None);
            }
        }
        // if all fails, send a 404
        return PoxMessages.CreateErrorMessage(HttpStatusCode.NotFound);
    }

   
    public Message UnknownMessage(Message msg)
    {
        return PoxMessages.CreateErrorMessage(HttpStatusCode.NotImplemented);
    }
}

Once we’ve done a few checks on the URI’s path portion, we construct a file path from the base directory and the URI path, check whether such a file exists, and if it does we create a message for that file and return it. If we can’t find the resulting file name, we construct a 404 “not found” error message. The UnknownMessage() method receives all requests with HTTP methods other than GET and appropriately returns a 501 “not implemented” message. Web server done.
Well, ok. The actual messages are constructed in a helper class PoxMessages that aids in constructing the most common reply messages. The class is part of the extension assembly code you can download and therefore I just quote the relevant methods that are used above. We’ll start with PoxMessages.CreateErrorMessage(), because it is very simple:

public static Message CreateErrorMessage(System.Net.HttpStatusCode code)
{
   Message reply = Message.CreateMessage("urn:reply");
   HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty();
   responseProperty.StatusCode = code;
   reply.Properties.Add(HttpResponseMessageProperty.Name, responseProperty);
   return reply;
}

We create a plain, empty service model Message, create an HttpResponseMessageProperty instance, set the StatusCode to the status code we want and add the property to the message. Return, done.
The PoxMessages.CreateFileReplyMessage() method is a bit more complex, because it, well, involves opening files. I am not showing you the exact overload that’s used in the above example but the one that’s being delegated to:  

public static Message CreateFileReplyMessage(string fileName, long rangeOffset, long rangeLength, ReplyOptions options)
{
   string contentType = GetContentTypeFromFileName(fileName);
   try
   {
      FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read);
      if (rangeOffset != -1 && rangeLength != -1)
      {
         SegmentStream segmentStream = new SegmentStream(fileStream, rangeOffset, rangeLength, true);
         return PoxMessages.CreateRawReplyMessage(segmentStream, contentType, rangeOffset, rangeLength, fileStream.Length, Path.GetFileName(fileName), options);
      }
      else
      {
         return PoxMessages.CreateRawReplyMessage(fileStream, contentType, Path.GetFileName(fileName), options);
      }
   }
   catch
   {
      return PoxMessages.CreateNotFoundMessage();
   }
}

The implementation will first make a guess for the file’s content-type based on the file-name, which is a simple registry lookup with a fallback to application/octet-stream.  Then it’ll try to open the file using a FileStream object. If that works –  ignoring the special case with rangeOffset/rangeLength being set – we delegate to the CreateRawReplyMessage() method:
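The content-type guess is the registry-lookup technique just described. This is my reconstruction of what such a GetContentTypeFromFileName could look like; the exact code in the download may differ in detail:

```csharp
// My reconstruction of the registry-lookup technique described above; the
// real GetContentTypeFromFileName in the download may differ in detail.
using System.IO;
using Microsoft.Win32;

static class ContentTypes
{
    public static string GetContentTypeFromFileName(string fileName)
    {
        string extension = Path.GetExtension(fileName); // e.g. ".htm"
        if (!string.IsNullOrEmpty(extension))
        {
            // HKEY_CLASSES_ROOT\<ext> carries a "Content Type" value for
            // most registered file extensions
            using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(extension))
            {
                string contentType = (key != null) ? key.GetValue("Content Type") as string : null;
                if (contentType != null)
                {
                    return contentType;
                }
            }
        }
        // fallback when the extension is unknown
        return "application/octet-stream";
    }
}
```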

public static Message CreateRawReplyMessage(Stream stm, string contentType, long rangeOffset, long rangeLength, long totalLength, string streamName, ReplyOptions options)
{
   PoxStreamedMessage reply = new PoxStreamedMessage(stm, 16384);
   HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty();
   if ((options & ReplyOptions.ContentDisposition) == ReplyOptions.ContentDisposition)
   {
      responseProperty.Headers.Add("Content-Disposition", String.Format("attachment; filename=\"{0}\"", streamName));
   }
    responseProperty.Headers.Add("Content-Type", contentType);
   if (rangeOffset != -1 && rangeLength != -1)
   {
      responseProperty.StatusCode = System.Net.HttpStatusCode.PartialContent;
      // Content-Range uses an inclusive last-byte position
      responseProperty.Headers.Add("Content-Range", String.Format("bytes {0}-{1}/{2}", rangeOffset, rangeOffset + rangeLength - 1, totalLength));
      responseProperty.Headers.Add("Content-Length", rangeLength.ToString());
   }
   else
   {
        if ((options & ReplyOptions.AcceptRange) == ReplyOptions.AcceptRange)
        {
            responseProperty.Headers.Add("Content-Range", String.Format("bytes {0}-{1}/{2}", 0, totalLength-1, totalLength));
        }
        responseProperty.Headers.Add("Content-Length", totalLength.ToString());
   }
   if ((options & ReplyOptions.NoCache) == ReplyOptions.NoCache)
   {
      responseProperty.Headers.Add("Cache-Control", "no-cache");
      responseProperty.Headers.Add("Expires", "-1");
   }
   if ((options & ReplyOptions.AcceptRange) == ReplyOptions.AcceptRange)
   {
      responseProperty.Headers.Add("Accept-Ranges", "bytes");
   }
    reply.Properties.Add(HttpResponseMessageProperty.Name, responseProperty);
   reply.Properties.Add(PoxEncoderMessageProperty.Name, new PoxEncoderMessageProperty(true));
   return reply;
}

That method takes the stream, wraps it in our PoxStreamedMessage, sets all desired HTTP headers on the HttpResponseMessageProperty, adds the property to the message and lastly adds the PoxEncoderMessageProperty indicating that we want the encoder to operate in raw binary mode. However, all these helper methods are already part of the library and therefore the application code doesn’t really have to deal with all of that anymore. You just construct the fitting message, stick the content into it and return it.
So now we have a service class and what’s left to do is to host it. For that we need a simple service host with a tiny little twist. Since the ServiceMetadataBehavior that typically gives you the WSDL file and the service information page would conflict with our direct interaction with HTTP, we need to switch it off. We do that by removing it from the list of behaviors before the service is initialized.

public class MyWebServerHost : ServiceHost
{
    public MyWebServerHost(object instance)
        : base(instance)
    {
    }

    protected override void OnInitialize()
    {
        Description.Behaviors.Remove<ServiceMetadataBehavior>();
        base.OnInitialize();
    }
}


class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("LittleIndigoWebServer");
        if (args.Length == 0)
        {
            Console.WriteLine("Usage: LittleIndigoWebServer.exe [root path]");
            return;
        }

        // read the root directory only after the argument check, so an
        // empty command line can't throw
        string directoryName = args[0];
        if (!Directory.Exists(directoryName))
        {
            Console.WriteLine("Directory '{0}' does not exist", directoryName);
            return;
        }

        Console.WriteLine("Web server starting.");

        using (MyWebServerHost host = new MyWebServerHost(new MyWebServer(directoryName)))
        {
            host.Open();
            Console.WriteLine("Web Server running. Press ENTER to quit.");
            Console.ReadLine();
            host.Close();
        }
    }
}

The rest is just normal business for hosting and setting up a service model service in a console application. We read the first argument from the command line and assume it’s a directory name. We verify that that is indeed so, then construct the service host, passing the singleton newed up with the directory name. We open the service host and we have the web server listening. The details of how the service is exposed on the network are the job of configuration and binding and are, again, exhaustively explained in Part 8.

Below is the downloadable archive that contains two C# projects. Newtelligence.ServiceModelExtensions contains the extension set and LittleIndigoWebServer is the demo app above. The code compiles and works with the WinFX November and December CTPs.

If you have installed Visual Studio on drive C:, you should be able to run the sample immediately with F5, since the LittleIndigoWebServer project’s debugging settings pass the .NET SDK directory to the application on startup. So if you have that, start the server and then browse to http://localhost:8020/StartHere.htm and you get this:

Otherwise, you can just start the server using any directory of your choosing, preferably one with HTML content.

And that’s it. I am happy that I’ve got all of the stuff out. This is probably the most documentation I’ve ever written in one stretch for some public giveaway infrastructure, but I am sure it’s worth it. I will follow up with more examples using these extensions. For instance, I will show how to use this for actual POX apps (the web server is just spitting out raw data, after all) using RSS, OPML and ASX. Stay tuned.

Oh, and … if you like this stuff I’d be happy about comments, questions, blog mentions and, first and foremost, public examples of other people using this stuff. License is BSD: Use as you like, risk is all yours, mention the creators. Enjoy.

Download: newtelligence-WCFExtensions-20060901.zip

[Note: I am preparing an update with client-side support and a few bugfixes right now. Should be available before or on 2006-01-16]

Categories: Indigo

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7

We’ve got all the moving pieces and what’s left is a way to configure the PoxEncoder into bindings and use those to hook a service up to the network and run it.

Bindings? Well, all Indigo (WCF) services need to know their ABC to function. ABC? I’ll quote from my WCF Introduction on MSDN:

“ABC” is the WCF mantra. “ABC” is the key to understanding how a WCF service endpoint is composed. Think Ernie, Bert, Cookie Monster or Big Bird. Remember "ABC".

·         "A" stands for Address: Where is the service?

·         "B" stands for Binding: How do I talk to the service?

·         "C" stands for Contract: What can the service do for me?

Web services zealots who read Web Service Description Language (WSDL) descriptions at the breakfast table will easily recognize these three concepts as the three levels of abstraction expressed in WSDL. So if you live in a world full of angle brackets, you can look at it this way:

·         "A" stands for Address—as expressed in the wsdl:service section and links wsdl:binding to a concrete service endpoint address.

·         "B" stands for Binding—as expressed in the wsdl:binding section and binds a wsdl:portType contract description to a concrete transport, an envelope format and associated policies.

·         "C" stands for Contract—as expressed in the wsdl:portType, wsdl:message and wsdl:type sections and describes types, messages, message exchange patterns and operations.

"ABC" means that writing (and configuring) a WCF service is always a three-step process:

·         You define a contract and implement it on a service

·         You choose or define a service binding that selects a transport along with quality of service, security and other options

·         You deploy an endpoint for the contract by binding it (using the binding definition, hence the name) to a network address.
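In code, the three steps collapse to very little. The IEchoService contract below is invented for the illustration and uses the stock BasicHttpBinding:

```csharp
// The three ABC steps in code; IEchoService/EchoService are made up for
// this illustration, and the binding is the standard BasicHttpBinding.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService            // C: what can the service do?
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService  // the implementation of C
{
    public string Echo(string text) { return text; }
}

class Host
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(EchoService)))
        {
            host.AddServiceEndpoint(
                typeof(IEchoService),                   // C
                new BasicHttpBinding(),                 // B: how do I talk to it?
                new Uri("http://localhost:8080/echo")); // A: where is it?
            host.Open();
            Console.WriteLine("Press ENTER to quit.");
            Console.ReadLine();
        }
    }
}
```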

A binding is a layered combination of a transport, an encoder and of any additional protocol channels (reliable session, transaction flow, etc.) that you’d like to assemble into a transport stack for exposing a service implementation on a specific endpoint address.

For exposing a HTTP-based RESTish service we need:

A.    The HTTP address to host the service at.

B.     Some binding configuration that tells Indigo’s HTTP transport to use our PoxEncoder.

C.     An implementation of a service-contract that’s configured (using the HttpMethodOperationSelectorSection config extension) or marked-up (with the HttpMethodOperationSelectorAttribute) to use our HttpMethodOperationSelectorBehavior for endpoint address filtering and selecting methods.

I’ve shown you quite a few contract variants in the first parts of this series and therefore I don’t really have to explain the C in too much detail anymore; except: while the A is just a plain HTTP address such as http://www.example.com/service, it’s interesting insofar as this address is, unlike with SOAP services, really just the common address prefix for the dispatch URIs of the particular service, and there is a split between what is A and what is C.

As I’ve explained, the philosophy behind the contract design of my extensions is around the namespaces that are the basis for forming URIs. Because REST services aren’t simply using HTTP as a transport tunnel as SOAP services do, but rather leverage HTTP as the application protocol it is, the URI is a lot more than just a drop-off point for messages. With REST services, the URI is an expression that has both addressing (transport) and contract (dispatch) aspects to it, and we need to separate those out. A clear distinction between global and local namespaces allows us to do that (and I am going into a bit more detail than I usually would, to further address an objection of Mark Baker, 1st comment, on my choice of the programming model):

There is a global namespace that’s managed by the global DNS system, of which anyone can reserve a chunk for themselves by registering a domain-name. The domain-name provides a self-manageable namespace root, of which sub-namespaces (subdomains) can be derived and allocated to specific hosts/services or groups of hosts/services by the domain owners. On the particular host, you can put an application behind a specific port, which might be either the default port of your particular application protocol or – diverging from the protocol standard – some other port of your choosing. Each Internet application deployment therefore has at least one unique mapping into this global namespace system.

Any further segmentation of the namespace beyond the host-name and the port-number is a private matter of the application listening on that endpoint. With Indigo self-hosted HTTP services, the listening application is Windows (!) – more precisely it’s the HTTP.SYS kernel listener. For IIS/WAS hosted services, the listening application is IIS (for IIS 5.1 and below) or, again, Windows – through that very listener. At the HTTP.SYS listener, handler processes can register their interest in requests sent to certain sub-namespaces of the global namespace mapping (host/port), which are identified by relative URIs. To be clear: The HTTP.SYS API indeed requires the caller to provide an absolute URI like http://hostname:port/service, but the two main parts (scheme/host/port and path) of that URI are used for different purposes:

·         Global mapping: The hostname and port are used to establish a new listener on that particular port (if there is already a listener it is shared) and to populate the hostname-filter table that’s used to disambiguate requests by the Host header in case the IP address is mapped to multiple DNS host entries.

·         Local mapping: The path information of the URI (the remaining relative URI with scheme, hostname and port stripped) is used as a prefix-matching expression to figure out which handler process shall receive the request and, inside that handler process, to further identify and invoke the appropriate endpoint and handler that deals with the resource that the complete URI path represents.
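You can watch that split at work without any Indigo at all, using the managed HttpListener wrapper over HTTP.SYS. Port 8020 and the /serviceA/ path prefix below are arbitrary choices for the illustration:

```csharp
// Not Indigo at all: the managed HttpListener wrapper over HTTP.SYS shows
// the same split. Port 8020 and "/serviceA/" are arbitrary choices.
using System;
using System.Net;
using System.Text;

class PrefixDemo
{
    static void Main()
    {
        HttpListener listener = new HttpListener();
        // scheme/host/port = global mapping; "/serviceA/" = claimed local sub-namespace
        listener.Prefixes.Add("http://localhost:8020/serviceA/");
        listener.Start();
        // serve exactly one request, echoing which prefix handled it
        HttpListenerContext context = listener.GetContext();
        byte[] body = Encoding.UTF8.GetBytes("handled by the /serviceA/ prefix");
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
        listener.Stop();
    }
}
```

A request to http://localhost:8020/serviceB/ on the same port would never reach this process; HTTP.SYS demultiplexes by path prefix before any user code runs.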

As indicated, mapping URIs to an endpoint needs to distinguish between how we segment and map a local namespace and how we hook that into the global namespace. Hence, the root for any absolute URI that establishes the mapping into the global namespace should always be separate from the code and reside in configuration for reasons of flexibility, as Mark was rightfully pointing out in his objection, while the shape and mapping of the local namespace is typically very application and use-case specific and might well be partially or entirely hardwired.

·         The Indigo A(ddress) of a REST service implemented with my particular programming model is used to hook a given service (or resource representation manager, if you like) into the global namespace: http://myservice.example.com/. Only to be pragmatic and to allow multiple such services to locally share a particular hostname and port and indeed only as an alternative and workaround to creating a separate DNS entry for each service, that mapping might include a path prefix allowing the local low-level infrastructure to demultiplex requests sent to the same global namespace mapping: http://myservices.example.com/serviceA and http://myservices.example.com/serviceB.

·         The Indigo C(ontract) of a REST service implemented with my particular programming model is used to define the shape of the local namespace that the service owns and which is used to provide access to the representations of the resource-types the service is responsible for.

The following configuration snippet for a simple web-server based on my extensions is illustrating that split:

<services>
   <service type="LittleIndigoWebServer.MyWebServer">
      <endpoint contract="LittleIndigoWebServer.IMyWebServer"
                address="http://localhost:8020/"
                binding="customBinding"
                bindingConfiguration="poxBinding"/>
   </service>
</services>

The address http://localhost:8020/ is how I map the service into the global addressing namespace. The local namespace shape for that particular service is defined by the layout of the file-system directory from which the service grabs files and returns them. What? You can’t see the directory structure and the resulting URLs from the above mapping? Of course not. It’s a private matter of the service implementation what the local namespace structure is, and it’s up to me which parts I am exposing. If I am nice enough I will give you something on a GET/HEAD request on the root of my local namespace (= global address without any suffix), and if I am not nice you get a 404 and will just have to know what to ask for. The “will have to know” part is contract. It’s an assurance that if you come looking at a particular place in my namespace you will have access to a particular thing. My [HttpMethod] attributes manifest that assurance on Indigo contracts.

That leaves B. Before I got carried away by A and C, I wrote [now a bit annotated] “A binding is a layered combination of a transport, an encoder and of any additional protocol channels (reliable session, transaction flow, etc.) that you’d like to assemble into a transport stack for exposing a service implementation [C] on a specific endpoint address [A].”

Putting together such a binding is not much more work than putting a little text between angle brackets and quotation marks in config as shown in the following snippet:

<customBinding>
   <binding name="poxBinding">
      <poxEncoder/>
      <httpTransport mapAddressingHeadersToHttpHeaders="true"
                     maxMessageSize="2048000" maxBufferSize="2048000" manualAddressing="true"
                     authenticationScheme="Anonymous" transferMode="StreamedResponse" />
   </binding>
</customBinding>

I am building a custom binding that combines the HTTP transport with a custom binding element config extension I built for the PoxEncoder. It’s that simple. And adding the binding element extension does not require black magic, either. It’s just another XML snippet that maps the extension class to an element name (“poxEncoder”), as you can see in the extensions section of the complete config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
   <system.serviceModel>
      <extensions>
         <bindingElementExtensions>
            <add name="poxEncoder" type="newtelligence.ServiceModelExtensions.PoxEncoderBindingExtension, newtelligence.ServiceModelExtensions"/>
         </bindingElementExtensions>
      </extensions>
      <bindings>
         <customBinding>
            <binding name="poxBinding">
               <poxEncoder/>
               <httpTransport mapAddressingHeadersToHttpHeaders="true"
                              maxMessageSize="2048000" maxBufferSize="2048000" manualAddressing="true"
                              authenticationScheme="Anonymous" transferMode="Streamed" />
            </binding>
         </customBinding>
      </bindings>
      <services>
         <service type="LittleIndigoWebServer.MyWebServer">
            <endpoint contract="LittleIndigoWebServer.IMyWebServer"
                      address="http://localhost:8020/"
                      binding="customBinding"
                      bindingConfiguration="poxBinding"/>
         </service>
      </services>
   </system.serviceModel>
</configuration>

The PoxEncoderBindingExtension is a class that is based on System.ServiceModel.Configuration.BindingElementExtensionSection. Whenever the configuration is processed by Indigo, the presence of the “poxEncoder” element in a binding triggers the creation of an instance of the class, and if we required any configuration attributes (which we don’t), those would be stuffed into the Properties collection.

using System;
using System.ServiceModel.Configuration;
using System.ServiceModel;
using System.Configuration;

namespace newtelligence.ServiceModelExtensions
{
   public class PoxEncoderBindingExtension : BindingElementExtensionSection
   {
        /// <summary>
        /// Initializes a new instance of the <see cref="T:PoxEncoderBindingExtension"/> class.
        /// </summary>
      public PoxEncoderBindingExtension()
      {
      }

        /// <summary>
        /// Creates the binding element.
        /// </summary>
        /// <returns></returns>
      protected override BindingElement CreateBindingElement()
      {
         PoxEncoderBindingElement pcc = new PoxEncoderBindingElement();
         return pcc;
      }

        /// <summary>
        /// Gets the type of the binding element.
        /// </summary>
        /// <value>The type of the binding element.</value>
      public override Type BindingElementType
      {
         get
         {
            return typeof(PoxEncoderBindingElement);
         }
      }

        /// <summary>
        /// Gets the name of the configured section.
        /// </summary>
        /// <value>The name of the configured section.</value>
      public override string ConfiguredSectionName
      {
         get
         {
            return "poxEncoder";
         }
      }

      private ConfigurationPropertyCollection properties;
        /// <summary>
        /// Gets the collection of properties.
        /// </summary>
        /// <value></value>
        /// <returns>The <see cref="T:System.Configuration.ConfigurationPropertyCollection"></see> collection of properties for the element.</returns>
      protected override ConfigurationPropertyCollection Properties
      {
         get
         {
            if (this.properties == null)
            {
               ConfigurationPropertyCollection configProperties = new ConfigurationPropertyCollection();
               this.properties = configProperties;
            }
            return this.properties;
         }
      }
   }
}

Once the configuration information has been read, the extension is asked to create a BindingElement from the acquired information. So these extensions are really just factories for binding elements. The binding element, which can also be used to compose such a binding in code by explicitly adding it to a System.ServiceModel.CustomBinding, is shown below:

 using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;
using System.ServiceModel.Design;
using System.ServiceModel.Channels;

namespace newtelligence.ServiceModelExtensions
{
    public class PoxEncoderBindingElement : BindingElement, IMessageEncodingBindingElement
   {
        /// <summary>
        /// Clones this instance.
        /// </summary>
        /// <returns></returns>
      public override BindingElement Clone()
      {
         return new PoxEncoderBindingElement();
      }

        /// <summary>
        /// Creates the message encoder factory.
        /// </summary>
        /// <returns></returns>
      public MessageEncoderFactory CreateMessageEncoderFactory()
      {
         return new PoxEncoderFactory();
      }

        /// <summary>
        /// Gets the addressing version.
        /// </summary>
        /// <value>The addressing version.</value>
      public AddressingVersion AddressingVersion
      {
         get
         {
            return AddressingVersion.Addressing1;
         }
      }

        /// <summary>
        /// Gets the protection requirements.
        /// </summary>
        /// <returns></returns>
      public override System.ServiceModel.Security.Protocols.ChannelProtectionRequirements GetProtectionRequirements()
      {
         return null;
      }

        /// <summary>
        /// Builds the channel factory.
        /// </summary>
        /// <param name="context">The context.</param>
        /// <returns></returns>
      public override IChannelFactory BuildChannelFactory(ChannelBuildContext context)
      {
         if (context == null)
            throw new ArgumentNullException("context");

         context.UnhandledBindingElements.Add(this);
         return context.BuildInnerChannelFactory();
      }

        /// <summary>
        /// Builds the channel listener.
        /// </summary>
        /// <param name="context">The context.</param>
        /// <returns></returns>
      public override IChannelListener<TChannel> BuildChannelListener<TChannel>(ChannelBuildContext context)
      {
         if (context == null)
            throw new ArgumentNullException("context");

         context.UnhandledBindingElements.Add(this);
         return context.BuildInnerChannelListener<TChannel>();
      }
   }
}
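Since the binding element can also be added to a CustomBinding in code, here is a minimal sketch of composing the equivalent of the poxBinding imperatively. I’m using the class and property names of the shipped WCF API (CustomBinding, HttpTransportBindingElement, TransferMode); the pre-release Indigo builds this series targets may spell some of these slightly differently.

```csharp
// Compose the binding in code instead of config (sketch, shipped-WCF names).
HttpTransportBindingElement http = new HttpTransportBindingElement();
http.ManualAddressing = true;                       // we do our own dispatch
http.TransferMode = TransferMode.StreamedResponse;  // stream large responses

// Encoder element layered above the transport, just like in the config file.
CustomBinding poxBinding = new CustomBinding(
    new PoxEncoderBindingElement(),
    http);
```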

Binding elements are typically used to put together client-side (channel factory) or service-side (channel listener) transport stacks. At the bottom is the transport and layered on top of it are security, reliable sessions, transaction flow and all other protocol features you need. Each protocol or feature on the channel/listener level has its own binding element and using those you configure yourself a binding combining the features you need and in the order that they should be applied.

The binding elements for message encoders are a bit different, because they are not contributing their own channel factories or channel listeners into the stack, but rather “only” supply the message encoder for the configured transport.

Whenever a binding is instantiated, Indigo creates a ChannelBuildContext which contains the sequence of the binding elements that shall be stacked onto each other into a channel or listener stack, and starts stacking them from top to bottom by invoking the topmost binding element’s BuildChannelListener or BuildChannelFactory method. Once a binding element is done creating its channel factory or channel listener, it invokes BuildInnerChannel[Listener/Factory] on the context to have the binding element underneath do its work. (The context is also used to validate whether the combination of the elements yields a functional binding stack, but I won’t go into that here.)

Our binding element, however, won’t create a channel factory or listener, but rather puts itself into the UnhandledBindingElements collection on the build context and then just has the context complete the construction work. By putting itself into that collection, the binding element makes itself and its most irresistible feature (you’d also think that if you were an Indigo transport) – the IMessageEncodingBindingElement implementation – visible to the transport and waves its hand that it wants to be used. The transport’s binding element, which is at the bottom of the stack and therefore asked to build its channel factory/listener after our binding element has been invoked, will look in the UnhandledBindingElements collection to see whether a message encoding binding element is advertising itself for use. If so, it will forget all of its defaults and happily embrace and use an encoder created by the factory returned by IMessageEncodingBindingElement.CreateMessageEncoderFactory, which is, in our case, this rather simple class:

 using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel.Channels;
using System.ServiceModel;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    ///
    /// </summary>
   public class PoxEncoderFactory : MessageEncoderFactory
   {
      MessageEncoder encoder;

        /// <summary>
        /// Initializes a new instance of the <see cref="T:PoxEncoderFactory"/> class.
        /// </summary>
      public PoxEncoderFactory()
      {
          encoder = new PoxEncoder();

      }

         /// <summary>
        /// Gets the encoder.
        /// </summary>
        /// <value>The encoder.</value>
        public override MessageEncoder Encoder
      {
         get
         {
            return encoder;
         }
      }

        /// <summary>
        /// Gets the message version.
        /// </summary>
        /// <value>The message version.</value>
      public override MessageVersion MessageVersion
      {
         get
         {
            return encoder.MessageVersion;
         }
      }
   }
}

Soooooooo….!

If you had actually copied all those classes from Parts 1-8 down into local files and compiled them into an assembly, you’d have all my REST/POX plumbing code by now (except, admittedly, an application-level utility class that helps putting messages together).

But wait … don’t do that. In the next part(s) I’ll give you the code all packed up and ready to compile along with the little web server that we’ve configured here and will also share some code snippets from my TV app … maybe the RSS and ASX pieces?

Categories: Indigo

January 4, 2006
@ 09:21 PM

In case you are not following my Indigo REST/POX series, I quote one paragraph from today's Part 7 that is well worth quoting out of context. It talks about (SOAP-) messages and the misconception that a message is a small thing:

There’s no specification that says that you cannot stick 500 Terabyte or 500 Exabyte worth of data (think 365x24 live 1080i video streams) into a single message. As long as you have some reason to believe that the sender will eventually, in 20 years from now, give you “</soap:Body></soap:Envelope>” to terminate the message, the message can be assumed to be well-formed and complete.

The WCF transports that support "streamed" transfer-mode (all except MSMQ) all consider messages to be monsters like that when streaming is enabled. I have a bit more on the streaming mode in today's part of the series.

Categories: Indigo

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9

Where are we?

·         In Parts 1 and 2, I explained contracts in the REST/POX context and the dispatch mechanisms that we need to enable Indigo to accept and handle REST/POX requests. With that I introduced a metadata extension, the [HttpMethod] attribute that can be used to mark up operations on an Indigo contract with HTTP methods and URI suffixes that we can dispatch on. I also showed how we can employ a parameter inspector to extract URI-embedded arguments and flow them to the operation in a message property.

·         In Parts 3 and 4, I showed how we use the [HttpMethodOperationSelector] attribute to replace Indigo’s default address filtering and operation selection mechanisms, basically the entire message dispatch mechanism, with our own variants. The SuffixFilter is used to find the appropriate endpoint for an incoming request and the HttpMethodOperationSelectorBehavior finds the operation (method) on that endpoint which shall receive the incoming request message.

·         In Parts 5 and 6, you saw how the PoxEncoder puts outbound envelope-less POX documents onto the wire in its WriteMessage methods and accepts incoming non-SOAP XML requests through its ReadMessage methods and wraps them with an in-memory envelope (“message”) for further processing. I also showed the PoxBase64XmlStreamReader, which is an XML Infoset wrapper for arbitrary binary streams that interacts with the PoxEncoder to allow smuggling any sort of raw binary content through the Indigo infrastructure and onto the wire.

We’re pretty far along already. We’ve got the dispatch mechanisms, we know how to hook the dispatch metadata into the services, we’ve got the wire-encoding – we have most of the core pieces together. In fact, the last two key classes we’re missing (configuration hooks aside) are two specialized message classes that we need to handle incoming requests. In Part 6, you could see that the two ReadMessage overloads of the PoxEncoder delegate all work to the PoxBufferedMessage for the “buffered” transfer-mode overload and to PoxStreamedMessage for the “streamed” transfer mode overload.

ReadMessage is called on an encoder whenever a transport has received a complete message buffer (buffered mode) or has accepted and opened a network stream (streamed mode).

Using streamed mode means very concretely that Indigo will start handling the message even though the message might not have completely arrived. A transport in streaming mode will only do as much as it needs to do in order to deal with the transport-level framing protocol. I use “framing protocol” as a general term for what is done at the transport level to know what the nature of the payload is and where the payload starts and ends. For HTTP, the HTTP transport figures out whether an incoming request is indeed an HTTP request, will read/parse the HTTP headers, and will then layer a stream over the request’s content, irrespective of whether the transfer of that byte sequence has already been completed. This stream is immediately handed off to the rest of the Indigo infrastructure and the transport has done its work by doing so.

Pulling the remaining bytes from that stream is someone else’s responsibility in streamed mode. Whenever a piece of the infrastructure pulls data directly or indirectly from the stream and the data chunk requested is still in transfer, the stream will block and wait until the data is there. The transport’s handling of the framing protocol will typically also take care of chunking and thus make a chunked stream appear to be continuous. When I say “indirect pull” I mean that it may very well be an XmlDictionaryReader layered over an XmlReader layered over the incoming network stream.
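The “indirect pull” can be pictured with a few lines of plain .NET code. Nothing here is specific to Indigo, and GetIncomingStream() is just a hypothetical stand-in for whatever content stream the transport hands over:

```csharp
// Hypothetical stand-in for the transport's content stream.
Stream networkStream = GetIncomingStream();

// Layer readers over the stream; no data has been consumed yet.
XmlDictionaryReader reader =
    XmlDictionaryReader.CreateDictionaryReader(new XmlTextReader(networkStream));

// Each Read() pulls from the XmlTextReader, which pulls from the stream;
// if the requested chunk is still in transfer, the stream blocks right here.
while (reader.Read())
{
    // process nodes as they trickle in off the wire
}
```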

The streaming mode is of particular interest for very large messages that may, in an extreme case, be virtually limitless in size. There’s no specification that says that you cannot stick 500 Terabyte or 500 Exabyte worth of data (think 365x24 live 1080i video streams) into a single message. As long as you have some reason to believe that the sender will eventually, in 20 years from now, give you “</soap:Body></soap:Envelope>” to terminate the message, the message can be assumed to be well-formed and complete.

No matter whether you use buffered or streamed mode, the configured encoder’s ReadMessage method is the first place where the read data chunk or the stream goes, and that delegates, as shown, to our two message classes. So let’s look at them.

We’ll primarily look at the PoxBufferedMessage, which is constructed over the read message buffer in the PoxEncoder like this:

public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager)
{
   return new PoxBufferedMessage(buffer, bufferManager);
}

The class PoxBufferedMessage is derived from the abstract System.ServiceModel.Message class and implements the base-class’s abstract properties Headers, Properties, and Version and overrides the OnClose(), OnGetReaderAtBodyContents(), and OnWriteBodyContents() virtual methods. Internally, Indigo has several such Message implementations that are each customized for certain scenarios. Implementing our own variants of Message is simply another extensibility mechanism that Indigo gives us.

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;
using System.IO;
using System.Xml;
using System.Runtime.CompilerServices;
using System.ServiceModel.Channels;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    /// This class is one of the message classes used by the <see cref="T:PoxEncoder"/>
    /// It serves to wrap an unencapsulated data buffer with a message structure.
    /// The data buffer becomes the body content of the message.
    /// </summary>
   public class PoxBufferedMessage : Message, IPoxRawBodyMessage
   {
      MessageHeaders headers = new MessageHeaders(MessageVersion.Soap11Addressing1);
      MessageProperties properties = new MessageProperties();
      byte[] buffer;
      int bufferSize;
      BufferManager bufferManager;
      Stream body;
       
        /// <summary>
        /// Initializes a new instance of the <see cref="T:PoxBufferedMessage"/> class.
        /// </summary>
        /// <param name="buffer">The buffer.</param>
      public PoxBufferedMessage(byte[] buffer)
      {
            this.bufferManager = null;
            this.buffer = buffer;
            this.bufferSize = buffer.Length;
      }

        /// <summary>
        /// Initializes a new instance of the <see cref="T:PoxBufferedMessage"/> class.
        /// </summary>
        /// <param name="buffer">The buffer.</param>
        /// <param name="bufferManager">The buffer manager.</param>
        public PoxBufferedMessage(ArraySegment<byte> buffer, BufferManager bufferManager)
        {
            this.bufferManager = bufferManager;
            this.bufferSize = buffer.Count;
            this.buffer = bufferManager.TakeBuffer(this.bufferSize);
            Array.Copy(buffer.Array, buffer.Offset, this.buffer, 0, this.bufferSize);
        }     

We can construct instances of the class over a raw byte array or an “array segment” layered over such an array. Array segments are preferred over raw arrays, because their use eases memory management. You can keep a pool of buffers with a common size, even though the actual content is shorter than the buffer size and probably even offset from the lower buffer boundary. If we get a raw byte array we simply adopt it, but if we get an array segment alongside a reference to a buffer manager we take a new buffer from the buffer manager and copy the array segment to that acquired buffer.
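As an aside, the pooling pattern the second constructor relies on looks like this with the shipped BufferManager API (CreateBufferManager/TakeBuffer/ReturnBuffer); the sizes below are arbitrary example values:

```csharp
// Create a pool: at most 1 MB retained in the pool, buffers up to 64 KB each.
BufferManager pool = BufferManager.CreateBufferManager(1024 * 1024, 64 * 1024);

// Ask for at least 500 bytes; the pool may hand back a larger recycled buffer.
byte[] pooled = pool.TakeBuffer(500);

// ... copy the payload (here: at most 500 bytes) into 'pooled' ...

// Hand the buffer back so it can be recycled instead of garbage-collected.
pool.ReturnBuffer(pooled);
```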

        /// <summary>
        /// Called when the message is being closed.
        /// </summary>
        protected override void OnClose()
        {
            base.OnClose();
            if ( bufferManager != null)
            {
                bufferManager.ReturnBuffer( buffer);
            }           
        }

When the message is being closed (or disposed, or finalized) and we have acquired the buffer through the buffer manager (which is signaled by the presence of the bufferManager reference), we duly return the buffer to the pool.

The next two methods are an implementation of the IPoxRawBodyMessage interface that is, you guessed it, defined in my extensions. If the handler method wants to get straight at the raw body content, knowing that it doesn’t expect XML, it can shortcut the whole XmlReader and XML serialization story by asking for the BodyContentType and pulling out the raw body data as a stream layered over the buffer:

        /// <summary>
        /// Gets the raw body stream.
        /// </summary>
        /// <returns></returns>
      [MethodImpl(MethodImplOptions.Synchronized)]
      public Stream GetRawBodyStream()
      {
         if ( body == null)
         {
             body = new MemoryStream( buffer,0, bufferSize,false,true);
         }
         return body;
      }

        /// <summary>
        /// Gets the content type of the raw message body based on the Content-Type HTTP header
        /// contained in the HttpRequestMessageProperty or HttpResponseMessageProperty of this
        /// message. The value is null if the type is unknown.
        /// </summary>
        public string BodyContentType
        {
            get
            {
                if (Properties.ContainsKey(HttpRequestMessageProperty.Name))
                {
                    return ((HttpRequestMessageProperty)Properties[HttpRequestMessageProperty.Name]).Headers["Content-Type"];
                }
                if (Properties.ContainsKey(HttpResponseMessageProperty.Name))
                {
                    return ((HttpResponseMessageProperty)Properties[HttpResponseMessageProperty.Name]).Headers["Content-Type"];
                }
                return null;
            }
        }

There is a bit of caution required using this mechanism, though. Because the message State (Created, Written, Read, Copied, Closed) is controlled by the base-class and cannot be set by derived classes, the message should be considered to be in the State==MessageState.Read after calling the GetRawBodyStream() method. That doesn’t seem to be necessary because we have a buffer here, but for the streamed variant that’s a must. And for the sake of consistency we introduce this constraint here.

The BodyContentType property implementation seems, admittedly, a bit strange at first sight. Even though you won’t see the message properties being populated anywhere inside this class, we’re asking for them and base the content-type detection on their values. That only makes sense when we consider the way messages are being populated by Indigo. As I explained, once the transport has a raw data chunk or stream in its hands that it believes to be a message, the first thing it invokes is the encoder. For incoming requests/messages, the encoder is really serving as the message factory, constructing Message-derived instances over raw data. Once the encoder has constructed the message in one of the ReadMessage overloads, the message is returned to the transport. If the transport wants, it can then (and the HTTP transport does) stick properties into that newly created message and then hand it off to the rest of the channel infrastructure for processing and dispatching. Because these extensions are built for REST/POX and therefore have HTTP affinity, that’s precisely what we assume to be happening for the BodyContentType property and the CreateBodyReader() method below. As I already explained in Part 1, the HTTP transport will always add an HttpRequestMessageProperty to the message, and that is consequently where we can grab the content-type of the incoming request data.

        private XmlDictionaryReader CreateBodyReader()
        {
            XmlDictionaryReader reader = null;

            /*
             * Check whether the message properties indicate that this is a raw binary message.
             * In that case, we'll wrap the body with a PoxBase64XmlStreamReader
             */
            bool hasPoxEncoderProperty = Properties.ContainsKey(PoxEncoderMessageProperty.Name);
            if (!(hasPoxEncoderProperty && ((PoxEncoderMessageProperty)Properties[PoxEncoderMessageProperty.Name]).RawBinary))
            {
                string contentType = null;

                /*
                 * Check for whether either the HttpRequestMessageProperty or the HttpResponseMessageProperty
                 * are present. If so, extract the HTTP Content-Type header. Otherwise the content-type is
                 * assumed to be text/xml ("POX")
                 */
                bool hasRequestProperty = Properties.ContainsKey(HttpRequestMessageProperty.Name);
                bool hasResponseProperty = Properties.ContainsKey(HttpResponseMessageProperty.Name);
                if (hasResponseProperty)
                {
                    HttpResponseMessageProperty responseProperty =
                      Properties[HttpResponseMessageProperty.Name] as HttpResponseMessageProperty;
                    contentType = responseProperty.Headers["Content-Type"];
                }
                else if (hasRequestProperty)
                {
                    HttpRequestMessageProperty requestProperty =
                       Properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
                    contentType = requestProperty.Headers["Content-Type"];
                }

                if (contentType == null)
                {
                    contentType = "text/xml";
                }

                /*
                 * If the content type is text/xml (POX) we will create a plain XmlTextReader for the body.
                 */
                if (contentType.StartsWith("text/xml", StringComparison.OrdinalIgnoreCase))
                {
                   // do we only have a UTF byte-order mark?
                   if (bufferSize <= 4)
                   {
                       // create a new reader over a fake infoset and place it on the EndElement
                       reader = XmlDictionaryReader.CreateDictionaryReader(
                           new XmlTextReader(new StringReader("<no-data></no-data>")));
                       reader.Read(); reader.Read();
                   }
                   else
                   {
                       reader = XmlDictionaryReader.CreateDictionaryReader(new XmlTextReader(GetRawBodyStream()));
                   }

                }
            }
            /*
             * If the content wasn't identified to be POX, we'll wrap it as binary. 
             */
            if (reader == null)
            {
                reader = XmlDictionaryReader.CreateDictionaryReader(new PoxBase64XmlStreamReader(GetRawBodyStream()));
            }
            return reader;
        }

The private CreateBodyReader() method, which constructs XML readers for both the OnGetReaderAtBodyContents() and the OnWriteBodyContents() overrides shown below, uses the same strategy to figure out the content-type of the message and therefore to guess what’s hidden inside the byte-array (or array segment) the message was constructed over. To make the message class useful for the request and response direction, we’ll distinguish two separate cases here:

·         If the message is a response, the handling method in the user code might have indicated that it wants the encoder to serialize the message onto the wire in “raw binary” mode. The indicator for that is the presence of the PoxEncoderMessageProperty having the RawBinary property set to true. If that is the case, the reader we return is always our PoxBase64XmlStreamReader. The property cannot occur in request messages because the Indigo transports simply don’t know about it.

·         If the message is a request or a response with the mentioned property missing, we will try figuring out the message’s content-type using the described strategy of using the HTTP transport’s message properties. If we can’t figure out a content-type for a response (it’s optional for the responding handler code to supply it), we will assume that the content-type is “text/xml”. If the message is a request we can rely on getting a content-type as long as the underlying transport is Indigo’s HTTP transport implementation. If the content-type is indeed “text/xml” we construct an XmlTextReader over the raw data and return it. If the content-type is anything else, we use our PoxBase64XmlStreamReader wrapper, because we have to assume that the encapsulated data we’re dealing with is not XML.

The OnGetReaderAtBodyContents() and the OnWriteBodyContents() overrides are consequently very simple:

        /// <summary>
        /// Called when the client requests a reader for the body contents.
        /// </summary>
        /// <returns></returns>
      protected override XmlDictionaryReader OnGetReaderAtBodyContents()
      {
         XmlDictionaryReader reader = CreateBodyReader();
         reader.MoveToContent();
         return reader;
      }

        /// <summary>
        /// Called when the client requests to write the body contents.
        /// </summary>
        /// <param name="writer">The writer.</param>
      protected override void OnWriteBodyContents(XmlDictionaryWriter writer)
      {
         XmlDictionaryReader reader = CreateBodyReader();
         writer.WriteNode(reader, false);
      }

What’s left to complete the message implementation are the compulsory overrides of the abstract properties of Message, for which we have backing fields declared at the top of the class:

        /// <summary>
        /// Gets the message version.
        /// </summary>
        /// <value>The message version.</value>
      public override MessageVersion Version
      {
         get
         {
            return MessageVersion.Soap11Addressing1;
         }
      }

        /// <summary>
        /// Gets the SOAP headers.
        /// </summary>
        /// <value>The headers.</value>
      public override MessageHeaders Headers
      {
         get
         {
            return headers;
         }
      }

        /// <summary>
        /// Gets the message properties.
        /// </summary>
        /// <value>The properties.</value>
      public override MessageProperties Properties
      {
         get
         {
            return properties;
         }
      }
    }
}

The PoxStreamedMessage is only different from this class insofar as it doesn’t have the buffer management. The GetRawBodyStream() method immediately returns the encapsulated stream and the remaining implementation is largely equivalent, if not identical (yes, I should consolidate that into a base class). Therefore I am not pasting that class here as code, but rather just append it as a downloadable file, alongside the declaration of IPoxRawBodyMessage and the twice mentioned and not yet shown PoxEncoderMessageProperty class.

With this, we’ve got all the moving pieces we need to build what’s essentially becoming an Indigo-based, message-oriented web-server infrastructure with a REST-oriented programming model. What’s missing is how we get our encoder configured into a binding so that we can put it all together and run it.

Configuration is next; wait for part 8.

Download: PoxEncoderMessageProperty.zip
Download: PoxStreamedMessage.zip
Download: IPoxRawBodyMessage.zip

[2006-01-13: Updated PoxBufferedMessage code to deal with entity bodies that only consist of a UTF BOM]

Categories: Indigo

Part 1, Part 2, Part 3, Part 4, Part 5

I threw a lot of unexplained code at you in Part 5 and that wasn’t really fair.

The PoxEncoder class is a replacement for Indigo’s default TextMessageEncoder class that’s used by the HTTP transport unless you explicitly configure something different. Indigo comes with three built-in encoders, namely:

·         The TextMessageEncoder serializes Indigo’s internal Message into SOAP 1.1 or SOAP 1.2 envelopes using (applies only to the latter) the desired character encodings (UTF-8, UTF-16, etc.) and of course it also deserializes incoming SOAP envelopes into the Indigo representation.

·         The MtomMessageEncoder serializes messages into SOAP 1.2 messages as specified by the MTOM specification, which allows for a much more compact transmission of binary-heavy SOAP envelopes than if you were simply using base64Binary encoded element data. MTOM is a good choice whenever the size of binary content in a SOAP envelope far exceeds the size of the rest of the data. Your mileage may vary, so that’s a thing to measure carefully unless it’s blatantly obvious such as in the case of writing a service for a digital imaging library.

·         The BinaryMessageEncoder serializes messages into SOAP 1.2 envelopes, but does so in a very compact binary format that preserves the XML information set, but is not at all like XML text. The gist of the binary encoding is the assumption that if both communicating parties are implemented with Indigo and share the same contract, the metadata existing at both ends reduces the hints that need to go on the wire. In other words: The binary encoding doesn’t need to throw all  these lengthy XML tag names and namespace names explicitly onto the wire, but can refer to them by pointing to a dictionary that’s identically constructed on both ends. The binary encoding in Indigo is a bit like the modern-day, loosely coupled grand-child of NDR and “midl.exe /Oicf” if you like. What’s important to note about this encoding is that its primary design goal is performance and interoperability is in fact a non-goal. The BinaryMessageEncoder assumes Indigo endpoints. If you don’t like that, you can always use the text encoding, which is designed for interoperability.
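To make the composition concrete, here is a hedged sketch of how one of these stock encoders gets paired with a transport in a custom binding; it uses the shipping System.ServiceModel.Channels type names, nothing from this article’s extensions:

```csharp
using System.ServiceModel.Channels;

public static class EncoderCompositionSketch
{
    public static CustomBinding CreateBinaryOverHttp()
    {
        // binding elements are ordered top-down; the encoder sits above
        // the transport, and the transport must come last
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),  // compact, Indigo-to-Indigo only
            new HttpTransportBindingElement());         // the transport the encoder feeds
    }
}
```

Swapping in TextMessageEncodingBindingElement or MtomMessageEncodingBindingElement at the same position selects the other two encoders; the PoxEncoder discussed below slots into that same position via its own binding element.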

Our PoxEncoder differs from all three Indigo encoders in that it specifically does not serialize SOAP messages, but rather just the body contents of a Message.

In order for you to understand what’s happening here, I’ll pick the most relevant methods and explain them in detail. We will start with the Initialize() method that is invoked by all three constructor overloads:

/// <summary>
/// Initializes common properties of the encoder.
/// </summary>
private void Initialize()
{
   if (this.MessageVersion.Envelope == EnvelopeVersion.Soap12)
   {
      // set the appropriate media type for SOAP 1.2
      this.mediaType = "application/soap+xml";
   }
   else if (this.MessageVersion.Envelope == EnvelopeVersion.Soap11)
   {
      // set the appropriate media type for SOAP 1.1
      this.mediaType = "text/xml";
   }
   // compose the content type from charset and media type
   this.contentType = string.Format(CultureInfo.InvariantCulture, "{0}; charset={1}", mediaType, textEncoding.WebName);
}

It is required for each MessageEncoder-derived class to implement the abstract properties MediaType, ContentType, and MessageVersion, and therefore we have to initialize the backing fields for these properties properly and return meaningful values even though the PoxEncoder is exactly the “anti-SOAP” encoder. The message version specified in the encoder is relevant for Indigo higher up on the stack, because it needs to know what rules and constraints apply to Message instances as they are constructed and processed. The content type and media type are required by the transports so that they know what content and/or media type to specify as metadata in their transport frame (e.g. the Content-Type header in HTTP). If we initialize the encoder with the Soap12 message version, it will consequently report the application/soap+xml media type, even though the encoder doesn’t ever write such envelopes to the wire. You might consider that a bug in the PoxEncoder and you might be right, but it doesn’t really matter: because our methods can return all sorts of payloads, we will override the content type at the message level, so this information really has no effect. I do need to clean this up a little. Later.

Now let’s look at the parts that actually do the work. I will start with the two WriteMessage overloads.

The first overload’s signature is
     public override ArraySegment<byte> WriteMessage(Message msg, int maxMessageSize, BufferManager bufferManager, int messageOffset)
and is invoked by the transport whenever a message must be wire-encoded and the output transfer mode is set to TransferMode.Buffered or TransferMode.StreamedRequest (which implies a buffered response). The second overload’s signature is
    public override void WriteMessage(Message msg, System.IO.Stream stream)
and is invoked by the transport whenever a message must be wire-encoded and the output transfer mode is set to TransferMode.Streamed or TransferMode.StreamedResponse (which implies a streamed response).

The transfer-mode property is configurable on all of the pre-built HTTP bindings and on the <httpTransport> binding element. “Buffered” encoding means that the entire message is encoded at once and written into a buffer, which is then given to the transport for sending. “Streamed” encoding means that the message is pushed into a stream, whereby the stream is typically layered directly over the transport. That means that whenever our encoder writes data to that stream, it is immediately pushed to the remote communication partner. The “streamed” mode is the optimal choice for sending very large messages that are, for instance, too big to be reasonably handled as a single memory block. The buffered mode is better (and faster) for compact messages. I’ll dissect the buffered variant first:
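As a reminder of where that knob lives, a configuration fragment along these lines switches the HTTP transport into streaming mode (a sketch; the binding name and the size limit are made up):

```xml
<!-- hypothetical config sketch: a custom binding with a streamed HTTP transport -->
<bindings>
  <customBinding>
    <binding name="poxStreamed">
      <httpTransport transferMode="Streamed"
                     maxReceivedMessageSize="4294967296" />
    </binding>
  </customBinding>
</bindings>
```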

public override ArraySegment<byte> WriteMessage(Message msg, int maxMessageSize, BufferManager bufferManager, int messageOffset)
{
   if (msg.IsEmpty)
   {
      // if the message is empty (no body defined) the result is an empty
      // byte array.
      byte[] buffer = bufferManager.TakeBuffer(maxMessageSize);
      return new ArraySegment<byte>(buffer, 0, 0);
   } 

If the message is empty (that means: the body is empty), we request a buffer from the buffer manager and return an empty slice of that buffer, because this encoder’s output is “nothing” if the body is empty. The BufferManager is an Indigo helper class that manages a pool of pre-allocated buffers and serves to optimize memory management by avoiding the allocation and the discarding of buffers for every message. An encoder should therefore use the buffer manager argument to obtain the buffers backing the array segment that is to be returned. Once the message has been handled by the transport, the transport will return the buffer into the pool.
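The pooling contract can be sketched in isolation like this; the helper name and the copy-from-a-byte-array scenario are mine, not the article’s, and the pool is created by the channel stack in the real flow:

```csharp
using System;
using System.ServiceModel.Channels;

public static class BufferPoolingSketch
{
    public static ArraySegment<byte> EncodeIntoPooledBuffer(
        byte[] payload, BufferManager bufferManager)
    {
        // take a pooled buffer at least as large as the payload
        // (the returned buffer may well be larger than requested)
        byte[] buffer = bufferManager.TakeBuffer(payload.Length);
        Buffer.BlockCopy(payload, 0, buffer, 0, payload.Length);
        // hand back a segment over the pooled buffer; the transport is
        // expected to call bufferManager.ReturnBuffer(buffer) after sending
        return new ArraySegment<byte>(buffer, 0, payload.Length);
    }
}
```

A standalone pool for experimenting can be had via BufferManager.CreateBufferManager(maxBufferPoolSize, maxBufferSize), e.g. CreateBufferManager(512 * 1024, 64 * 1024).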

   else
   {
      // check RawBinary bit in the message property
          bool rawBinary = false;
          if (msg.Properties.ContainsKey(PoxEncoderMessageProperty.Name))
      {
         rawBinary = ((PoxEncoderMessageProperty)msg.Properties[PoxEncoderMessageProperty.Name]).RawBinary;
      }

If the message is not empty (we have a body), we check whether there is a PoxEncoderMessageProperty present in the message. This property is a plain CLR class that is part of my extensions and has two significant properties: Name is a static, constant string value used as the key for the message properties collection and RawBinary is a Boolean instance value that contains an indicator for whether the encoder shall encode the data as XML or as raw binary data. The Message properties collection is a simple dictionary of objects keyed by strings. The properties allow application-level code to interact with infrastructure-level code in the way illustrated by this property. Whenever I want the encoder to use its “raw binary” mode, I add this property to the message and the encoder can pick up the information.
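The property class itself only appears in the downloadable sources, but from the way the encoder uses it, its shape is roughly this; the exact key string and the constructor are my assumptions:

```csharp
// Sketch of the message property the encoder looks for; the real class ships
// in the download, so the member details here are assumptions.
public class PoxEncoderMessageProperty
{
    // static key into the Message.Properties dictionary (assumed value)
    public const string Name = "PoxEncoderMessageProperty";

    private bool rawBinary;

    public PoxEncoderMessageProperty(bool rawBinary)
    {
        this.rawBinary = rawBinary;
    }

    // true => the encoder writes the body as raw bytes rather than XML
    public bool RawBinary
    {
        get { return rawBinary; }
    }
}
```

Application code would then opt into raw output with something like msg.Properties.Add(PoxEncoderMessageProperty.Name, new PoxEncoderMessageProperty(true)).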

      ArraySegment<byte> retval = new ArraySegment<byte>();
      byte[] buffer = bufferManager.TakeBuffer(maxMessageSize);
      if (!rawBinary)
      {
         // If we're rendering XML data, we construct a memory stream
         // over the output buffer, layer an XMLDictionaryWriter on top of it
         // and have the message write the body content into the buffer as XML.
         // The buffer is then wrapped into an array segment and returned.
         MemoryStream stream = new MemoryStream(buffer);
         XmlWriterSettings settings = new XmlWriterSettings();
         settings.OmitXmlDeclaration = true;
         settings.Indent = true;
         settings.Encoding = this.textEncoding;
         XmlWriter innerWriter = XmlWriter.Create(stream, settings);
         XmlDictionaryWriter writer = XmlDictionaryWriter.CreateDictionaryWriter(innerWriter, false);
         msg.WriteBodyContents(writer);
         writer.Flush();
         retval = new ArraySegment<byte>(buffer, 0, (int)stream.Position);
      }

Next we take a buffer from the buffer manager and if we’re not in “raw binary” mode, we’ll construct a memory stream over the buffer, construct an XmlDictionaryWriter over that stream and ask the message to render its “body contents” into the writer and therefore into the memory stream and into the buffer. The “body contents” of a message is what would be the child nodes of the <soap:Body> element, if we were using that (but we don’t). Once the body contents have been written, we flush the writer to make sure that all buffered data is committed into the underlying stream and then construct the return value as an array segment over the buffer with the length of the bytes written to the stream.

      else
      {
         // If we're rendering raw binary data, we grab at most 'buffer.Length'
         // bytes from the binary content of the base64Binary element (if that
         // exists) and return the result wrapped into an array segment.
         XmlDictionaryReader dictReader = msg.GetReaderAtBodyContents();
         if (dictReader.NodeType == XmlNodeType.Element &&
            dictReader.LocalName == "base64Binary")
         {
            if (dictReader.Read() && dictReader.NodeType == XmlNodeType.Text)
            {
               int size = dictReader.ReadContentAsBase64(buffer, 0, buffer.Length);
               retval = new ArraySegment<byte>(buffer, 0, size);
            }
         }
      }
      return retval;
   }
}
 

If the “raw binary” mode is to be used, we are making a bit of an assumption inside the encoder. The assumption is that the body content consists of a single element named “base64Binary” and that its content is just that: base64 binary encoded content. That is of course the other side of the PoxBase64XmlStreamReader trick I explained in Part 5. For binary data we simply assume here that the body reader is our wrapper class and this is how arbitrary binary data is smuggled through the Indigo infrastructure. The array segment to be returned is constructed by reading the binary data into the buffer and setting the array segment length to the number of bytes we could get from the element content.

The streamed version of WriteMessage is quite different:

public override void WriteMessage(Message msg, System.IO.Stream stream)
{
    try
    {
        if (!msg.IsEmpty)
        {
            // check RawBinary bit in the message property
            bool rawBinary = false;
            if (msg.Properties.ContainsKey(PoxEncoderMessageProperty.Name))
            {
                rawBinary = ((PoxEncoderMessageProperty)msg.Properties[PoxEncoderMessageProperty.Name]).RawBinary;
            }
            if (!rawBinary)
            {
                // If we're rendering XML, we layer an XMLDictionaryWriter over the
                // output stream and have the message render its body content into
                // that writer and therefore onto the stream.
                XmlWriterSettings settings = new XmlWriterSettings();
                settings.OmitXmlDeclaration = true;
                settings.Indent = true;
                settings.Encoding = this.textEncoding;
                XmlWriter innerWriter = XmlWriter.Create(stream, settings);
                XmlDictionaryWriter writer = XmlDictionaryWriter.CreateDictionaryWriter(innerWriter, false);
                msg.WriteBodyContents(writer);
                writer.Flush();
            }

The first significant difference is that if we’re using streams, we will simply ignore empty messages and do nothing with them. In streaming mode, the transport will do any setup work required for sending a message before invoking the encoder and ready the output network stream so that the encoder can write to it. When the encoder returns, the transport considers the write action done. So if we don’t write to the output stream, there’s no payload data hitting the wire and that happens to be what we want.

If we have data and we’re not in “raw binary” mode, the encoder will construct an XmlDictionaryWriter over the supplied stream and have the message write its body contents to it. That’s all.

            else
            {
                // If we're rendering raw binary data, we grab chunks of at most 1MByte
                // from the 'base64Binary' content element (if that exists) and write them
                // out as binary data to the output stream. Chunking is done, because we
                // have to assume that the body content is arbitrarily large. To optimize the
                // behavior for large streams, we read and write concurrently and swap buffers.
                XmlDictionaryReader dictReader = msg.GetReaderAtBodyContents();
                if (dictReader.NodeType == XmlNodeType.Element && dictReader.LocalName == "base64Binary")
                {
                    if (dictReader.Read() && dictReader.NodeType == XmlNodeType.Text)
                    {
                        byte[] buffer1 = new byte[1024*1024], buffer2 = new byte[1024*1024];
                        byte[] readBuffer = buffer1, writeBuffer = buffer2;
                       
                        int bytesRead = 0;
                        // read the first chunk into the read buffer
                        bytesRead = dictReader.ReadContentAsBase64(readBuffer, 0, readBuffer.Length);
                        do
                        {
                            // the abort condition for the loop is that we can't read
                            // any more bytes from the input because the base64Binary element is
                            // exhausted.
                            if (bytesRead > 0 )
                            {
                                // make the last read buffer the write buffer
                                writeBuffer = readBuffer;
                                // write the write buffer to the output stream asynchronously
                                IAsyncResult result = stream.BeginWrite(writeBuffer, 0, bytesRead,null,null);
                                // swap the read buffer
                                readBuffer = (readBuffer == buffer1) ? buffer2 : buffer1;
                                // read a new chunk into the 'other' buffer synchronously
                                bytesRead = dictReader.ReadContentAsBase64(readBuffer, 0, readBuffer.Length);
                                // wait for the write operation to complete
                                result.AsyncWaitHandle.WaitOne();
                                stream.EndWrite(result);
                            }
                        }
                        while (bytesRead > 0);
                    }
                }
            }
        }
    }
    catch
    {
        // the client may disconnect at any time, so that's an expected exception and absorbed.
    }
}

In streamed “raw binary” mode things get a bit more complicated. Under these circumstances we assume that the output we are sending is HUGE. The use-case I had in mind when I wrote this is the download of multi-GByte video recordings. Therefore I construct two 1MByte buffers that are used in turns to read a chunk of data from the source body reader (for which we make the same content assumption as for the buffered case: This is believed to be a PoxBase64XmlStreamReader compatible infoset) and asynchronously push the read data into the output stream.

Because it may take a while to get a huge data stream to the other side, a lot of things can happen to the network connection during that time. Therefore the encoder fully expects that the network connection terminates unexpectedly. If that happens, we’ll catch and absorb the network exception and happily return to the caller as if we’re done.

Compared to all the complexity of the WriteMessage overloads, the respective ReadMessage methods look fairly innocent, simple, and similar:

/// <summary>
/// Reads an incoming array segment containing a message and
/// wraps it with a buffered message. The assumption is that the incoming
/// data stream is <i>not</i> a SOAP envelope, but rather an unencapsulated
/// data item, may it be some raw binary, an XML document or HTML form
/// postback data. This method is called if the inbound transfer mode of the
/// transport is "buffered".
/// </summary>
/// <param name="buffer">Buffer to wrap</param>
/// <param name="bufferManager">Buffer manager to help with allocating a copy</param>
/// <returns>Buffered message</returns>
public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager)
{
   return new PoxBufferedMessage(buffer, bufferManager);
}

/// <summary>
/// Reads an incoming stream containing a message and
/// wraps it with a streamed message. The assumption is that the incoming
/// data stream is <i>not</i> a SOAP envelope, but rather an unencapsulated
/// data item, may it be some raw binary, an XML document or HTML form
/// postback data. This method is called if the inbound transfer mode of the
/// transport is "streamed".
/// </summary>
/// <param name="stream">Input stream</param>
/// <param name="maxSizeOfHeaders">Maximum size of headers in bytes</param>
/// <returns>Stream message</returns>
public override Message ReadMessage(System.IO.Stream stream, int maxSizeOfHeaders)
{
   return new PoxStreamedMessage(stream, maxSizeOfHeaders);
}

Both variants take the raw incoming data (whatever it is) and hand it to the PoxBufferedMessage class or the PoxStreamedMessage class, which adopt the buffer or the stream as their body content, respectively. I’ll explain those in Part 7.

Happy New Year!

Categories: Indigo

Part 1, Part 2, Part 3, Part 4

POX means “plain old XML” and I’ve also heard a definition saying that “POX is REST without the dogma”, but that’s not really correct. POX is not really well defined, but it’s clear that the “plain” is the focus and that means typically that folks who talk about “POX web services” explicitly mean that those services don’t use SOAP. You could see POX as an antonym for SOAP.

The design of Indigo (WCF) assumes that all messages that go onto the wire and come from the wire have a shape that is aligned with SOAP. That means that they have a “body” containing the user-defined message payload and a “header” section that contains the out-of-band metadata that helps get the message from one place to the next, possibly through a chain of intermediaries. Most of the Indigo binding elements and their implementations also assume that those metadata elements (headers) conform to their respective WS-* standard that they are dealing with.

However, Indigo isn’t hard-wired to a specific envelope format. The default “encoders” that are responsible for turning a message into a data stream (or a data package) that a transport can throw down a TCP socket or into a message queue (or whatever else) and which are likewise responsible for picking up the data from the wire to turn them into Message objects have two envelope formats baked in: SOAP 1.1 and SOAP 1.2. But that doesn’t mean that you have to use those. If your envelope format were different (there seem to be thousands, I’ll name AdsML [spec] as an example) and that’s what you want to use on the wire, you can assemble a binding that will compose an Indigo transport with your encoder. Moving away from SOAP means, though, that you can’t use the standard implementations of capabilities such as message-level security, reliable delivery, and transaction flow, because all of these are built on the assumption that you are exchanging WS-* headers with the other party and all of these specs depend on the SOAP information model. But if there are comparable specifications that come with your envelope format you can of course write Indigo extensions that you can configure into a binding just like you can compose the default binding elements. It’d be a lot of work to do that, but you’d still benefit greatly from the Indigo architecture per se.

When we want to use a REST/POX model, our envelope format is quite simple: We don’t really have an envelope.

The idea of POX is that there’s only payload and that out-of-band metadata is unnecessary fluff. The idea of REST is that there is already an appropriate place for out-of-band metadata and that’s the HTTP headers.

In order to make REST/POX work, we therefore need to replace the Indigo default encoder with an encoder that fulfills these requirements:

1.      Extract the message body XML content of any outbound message and format it for the wire as-is and without a SOAP envelope around it and

2.      Accept arbitrary inbound XML data and wrap it into a Message-derived class so that Indigo can handle it.

The use-case in whose context I’ve developed these extensions is a bit more far-reaching than POX, though: I want to support RESTful access to any data, including the multi-GByte unencapsulated MPEG recordings I make on my Media PC. I’ve therefore broadened these two requirements a bit and left out the “XML” constraint:

1.      Extract the message body content of any outbound message and format it for the wire as-is and without a SOAP envelope around it and

2.      Accept arbitrary inbound data of any content type and wrap it into a Message-derived class so that Indigo can handle it.

XML aka POX is an interesting content-type to throw around, but it’s by no means the only one and therefore let’s not restrict ourselves too much here. Any content is good.

But then again, Indigo is assuming that all messages flowing through its channels contain XML payloads and therefore we’ve got a bit of a nut to crack when we want to use Indigo for arbitrary, non-XML payloads of arbitrary size. Luckily, XML is just an illusion.

The Indigo Message holds the message body content inside an XmlDictionaryReader (which is an optimized derivation of the well-known XmlReader). To construct a message, you can walk up to the static Message.CreateMessage(string action, XmlDictionaryReader reader) factory method and pass the readily formatted body content as a reader object and the message will happily adopt it. But can we use the XmlReader to smuggle arbitrary binary content into the message so that our own encoder can later unwrap it and put it onto the wire in whatever raw binary format we like? Sure we can! The class below may look a bit like an evil hack, but it’s a perfectly legal construct:

using System;
using System.Collections.Generic;
using System.Text;
using System.Xml;
using System.IO;
using System.Xml.Schema;

namespace newtelligence.ServiceModelExtensions
{
   public class PoxBase64XmlStreamReader : XmlTextReader
   {
      private const string xmlEnvelopeString =
           "<base64Binary xmlns:xsi=\"" + XmlSchema.InstanceNamespace + "\" " +
           "xmlns:xsd=\"" + XmlSchema.Namespace + "\" " +
           "xsi:type=\"xsd:base64Binary\">placeholder</base64Binary>";
      Stream innerStream;

      /// <summary>
      /// Initializes a new instance of the <see cref="T:PoxBase64XmlStreamReader"/> class.
      /// </summary>
      /// <param name="stream">The stream.</param>
      public PoxBase64XmlStreamReader(Stream stream)
         : base(new StringReader(xmlEnvelopeString))
      {
         innerStream = stream;
      }

      /// <summary>
      /// Gets the Common Language Runtime (CLR) type for the current node.
      /// </summary>
      /// <value></value>
      /// <returns>The CLR type that corresponds to the typed value of the node. The default is System.String.</returns>
      public override Type ValueType
      {
         get
         {
            if (NodeType == XmlNodeType.Text && base.Value == "placeholder")
            {
               return typeof(Byte[]);
            }
            else
            {
               return base.ValueType;
            }
         }
      }
   
      /// <summary>
      /// Gets the text value of the current node.
      /// </summary>
      public override string Value
      {
         get
         {
            if (NodeType == XmlNodeType.Text && base.Value == "placeholder")
            {
               BinaryReader reader = new BinaryReader(innerStream);
               return Convert.ToBase64String(reader.ReadBytes((int)(reader.BaseStream.Length - reader.BaseStream.Position)));
            }
            return base.Value;
         }
      }

      /// <summary>
      /// Reads the content and returns the Base64 decoded binary bytes.
      /// </summary>
      /// <param name="buffer">The buffer into which to copy the resulting text. This value cannot be null.</param>
      /// <param name="index">The offset into the buffer where to start copying the result.</param>
      /// <param name="count">The maximum number of bytes to copy into the buffer. The actual number of bytes copied is returned from this method.</param>
      /// <returns>The number of bytes written to the buffer.</returns>
      public override int ReadContentAsBase64(byte[] buffer, int index, int count)
      {
         if (NodeType == XmlNodeType.Text && base.Value == "placeholder")
         {
            return innerStream.Read(buffer, index, count);
         }
         else
         {
            return base.ReadContentAsBase64(buffer, index, count);
         }
      }
   }
}

The PoxBase64XmlStreamReader is a specialized XML reader reading a fixed info-set constructed from a string that has a “placeholder” in whose place the content of a wrapped data stream is returned “as base64 encoded content”. Of course that latter statement is hogwash. The data is never encoded in base64 anywhere. But the consumer of the reader thinks that it is and that’s really good enough for us here. The XmlReader creates the illusion that the wrapped data stream were the “text” node of a base64Binary typed element and if that’s what the client wants to believe, we’re happy.  The implementation trick here is of course very simple. As long as the reader isn’t hitting the text node with the “placeholder” all work is being delegated to the base class. Once we arrive at that particular node, we change tactics and return the data type (byte[]) and the content of the wrapped stream instead of the “placeholder” string. After that we continue delegating to the base class. If the client asks for the Value of the text node, we are returning a base64 encoded string representation of the wrapped stream which might end up being pretty big. However, if the client is a bit less naïve about the content, it will figure that the data type is byte[] and therefore retrieve the data in binary chunks through the ReadContentAsBase64() method. Let’s assume that the client will be that clever.
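Putting the reader to work on the sending side might then look like this sketch; the CreateMessage overload is the one named above, while the action URI, the helper name, and the property constructor are invented for illustration:

```csharp
using System.IO;
using System.ServiceModel.Channels;
using System.Xml;
using newtelligence.ServiceModelExtensions;

public static class RawBinaryMessageSketch
{
    public static Message WrapStream(string path)
    {
        // the reader makes the raw stream look like a base64Binary element
        Stream data = File.OpenRead(path);
        XmlDictionaryReader reader =
            XmlDictionaryReader.CreateDictionaryReader(new PoxBase64XmlStreamReader(data));
        Message msg = Message.CreateMessage("urn:illustrative-action", reader);
        // tell the PoxEncoder to emit the bytes untouched (constructor assumed)
        msg.Properties.Add(PoxEncoderMessageProperty.Name, new PoxEncoderMessageProperty(true));
        return msg;
    }
}
```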

It doesn’t take much imagination to do so, because I’ve got the client right here. I used Doug Purdy’s PoxEncoder that he showed at PDC05 as a basis for this and extended it (quite) a bit:

using System;
using System.IO;
using System.Xml;
using System.Text;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Design;
using System.Runtime.CompilerServices;
using System.ServiceModel.Configuration;
using System.Configuration;
using System.Globalization;
using System.Xml.Schema;
using System.Diagnostics;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    /// This class is a wire-format encoder for System.ServiceModel that renders
    /// only the content (body) of a <see cref="T:Message"/> onto the wire, but not
    /// the surrounding SOAP message elements such as the envelope, the headers or
    /// the body element. Likewise, the encoder expects input to be in 'raw', unwrapped
    /// form and will wrap it into a message for processing by the System.ServiceModel
    /// infrastructure.
    /// </summary>
    public class PoxEncoder : MessageEncoder
   {
      string contentType;
      string mediaType;
      Encoding textEncoding;
      MessageVersion messageVersion;

      /// <summary>
      /// Creates a new instance of PoxEncoder
      /// </summary>
      public PoxEncoder()
      {
          messageVersion = MessageVersion.Soap11Addressing1;
          textEncoding = Encoding.UTF8;
         Initialize();
      }

      /// <summary>
      /// Creates a new instance of PoxEncoder
      /// </summary>
      /// <param name="messageVersion"></param>
      public PoxEncoder(MessageVersion messageVersion)
      {
         this.messageVersion = messageVersion;
          textEncoding = Encoding.UTF8;
         Initialize();
      }


      /// <summary>
      /// Creates a new instance of PoxEncoder
      /// </summary>
      /// <param name="textEncoding"></param>
      /// <param name="messageVersion"></param>
      public PoxEncoder(Encoding textEncoding, MessageVersion messageVersion)
      {
         this.textEncoding = textEncoding;
         this.messageVersion = messageVersion;
         Initialize();
      }

        /// <summary>
        /// Initializes common properties of the encoder.
        /// </summary>
      private void Initialize()
      {
         if (this.MessageVersion.Envelope == EnvelopeVersion.Soap12)
         {
            // set the appropriate media type for SOAP 1.2
            this.mediaType = "application/soap+xml";
         }
         else if (this.MessageVersion.Envelope == EnvelopeVersion.Soap11)
         {
            // set the appropriate media type for SOAP 1.1
            this.mediaType = "text/xml";
         }
         // compose the content type from charset and media type
         this.contentType = string.Format(CultureInfo.InvariantCulture, "{0}; charset={1}", mediaType, textEncoding.WebName);
      }

        /// <summary>
        /// Gets the content type for the encoder instance
        /// </summary>
      public override string ContentType
      {
         get
         {
            return contentType;
         }
      }

        /// <summary>
        /// Gets the media type for the encoder instance
        /// </summary>
      public override string MediaType
      {
         get
         {
            return mediaType;
         }
      }

        /// <summary>
        /// Gets an indicator for whether a given input content type is
        /// supported.
        /// </summary>
        /// <param name="contentType">ContentType</param>
        /// <returns>Indicates whether the content type is supported</returns>
        /// <remarks>
        /// TODO: This currently returns 'true' for all content types because the
        /// encoder isn't locked down in features yet and this easier to debug.
        /// The plan is to support at least: application/x-www-form-urlencoded,
        /// text/xml, application/soap+xml
        /// </remarks>
      public override bool IsContentTypeSupported(string contentType)
      {
         return true;
      }

        /// <summary>
        /// Gets the supported message version of this instance
        /// </summary>
      public override MessageVersion MessageVersion
      {
         get
         {
            return messageVersion;
         }
      }

        /// <summary>
        /// Reads an incoming array segment containing a message and
        /// wraps it with a buffered message. The assumption is that the incoming
        /// data stream is <i>not</i> a SOAP envelope, but rather an unencapsulated
        /// data item, be it some raw binary, an XML document or HTML form
        /// postback data. This method is called if the inbound transfer mode of the
        /// transport is "buffered".
        /// </summary>
        /// <param name="buffer">Buffer to wrap</param>
        /// <param name="bufferManager">Buffer manager to help with allocating a copy</param>
        /// <returns>Buffered message</returns>
        public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager)
      {
         return new PoxBufferedMessage(buffer, bufferManager);
      }

        /// <summary>
        /// Transforms an incoming message into a raw byte array that a transport can
        /// literally put on the wire as it is returned. This method is called if the outbound
        /// transfer mode of the transport is "buffered".
        /// </summary>
        /// <param name="msg">Input message</param>
        /// <param name="maxMessageSize">Maximum message size to be rendered</param>
        /// <param name="bufferManager">Buffer manager to optimize buffer allocation</param>
        /// <param name="messageOffset">Offset into the message to render.</param>
        /// <returns>Array segment containing the binary data to be put onto the wire by the transport.</returns>
        /// <remarks>
        /// <para>This method is the "secret sauce" of the PoxEncoder. Instead of encoding the
        /// message in its entirety, this encoder will unwrap the message body and toss out
        /// the envelope and all headers. The resulting "raw" message body (everything inside
        /// and not including soap:Body) will be written out to the transport.</para>
        /// <para>The encoder has an optional, "out of band" argument that is flowing into it
        /// as part of the message's Properties. By adding a <see cref="T:PoxEncoderMessageProperty"/>
        /// to the <see cref="Message.Properties"/> and setting its <see cref="PoxEncoderMessageProperty.RawBinary"/>
        /// property to 'true', you can switch the encoder into its 'raw binary' mode.</para>
        /// <para> In 'raw binary' mode, the encoder expects that the only child of the message
        /// body element is an element with a local name of "base64Binary" containing base64 encoded
        /// binary data. If that is the case, the encoder will read the content of that element
        /// and return it (not the XML wrapper) to the transport in binary form. If the content does
        /// not comply with this requirement, an empty array is returned.
        /// </para>
        /// </remarks>
      public override ArraySegment<byte> WriteMessage(Message msg, int maxMessageSize, BufferManager bufferManager, int messageOffset)
      {
         if (msg.IsEmpty)
         {
            // if the message is empty (no body defined) the result is an empty
            // byte array.
            byte[] buffer = bufferManager.TakeBuffer(maxMessageSize);
            return new ArraySegment<byte>(buffer, 0, 0);
         }
         else
         {
            // check RawBinary bit in the message property
                bool rawBinary = false;
                if (msg.Properties.ContainsKey(PoxEncoderMessageProperty.Name))
            {
               rawBinary = ((PoxEncoderMessageProperty)msg.Properties[PoxEncoderMessageProperty.Name]).RawBinary;
            }

            byte[] buffer = bufferManager.TakeBuffer(maxMessageSize);
            // default to an empty segment over the buffer so that the
            // raw-binary path below returns an empty array (not a null
            // one) if the content check does not yield any data
            ArraySegment<byte> retval = new ArraySegment<byte>(buffer, 0, 0);
            if (!rawBinary)
            {
               // If we're rendering XML data, we construct a memory stream
               // over the output buffer, layer an XMLDictionaryWriter on top of it
               // and have the message write the body content into the buffer as XML.
               // The buffer is then wrapped into an array segment and returned.
               MemoryStream stream = new MemoryStream(buffer);
               XmlWriterSettings settings = new XmlWriterSettings();
               settings.OmitXmlDeclaration = true;
               settings.Indent = true;
               settings.Encoding = this.textEncoding;
               XmlWriter innerWriter = XmlWriter.Create(stream, settings);
               XmlDictionaryWriter writer = XmlDictionaryWriter.CreateDictionaryWriter(innerWriter, false);
               msg.WriteBodyContents(writer);
               writer.Flush();
               retval = new ArraySegment<byte>(buffer, 0, (int)stream.Position);
            }
            else
            {
               // If we're rendering raw binary data, we grab at most 'buffer.Length'
               // bytes from the binary content of the base64Binary element (if that
               // exists) and return the result wrapped into an array segment.
               XmlDictionaryReader dictReader = msg.GetReaderAtBodyContents();
               if (dictReader.NodeType == XmlNodeType.Element &&
                  dictReader.LocalName == "base64Binary")
               {
                  if (dictReader.Read() && dictReader.NodeType == XmlNodeType.Text)
                  {
                     int size = dictReader.ReadContentAsBase64(buffer, 0, buffer.Length);
                     retval = new ArraySegment<byte>(buffer, 0, size);
                  }
               }
            }
            return retval;
         }
      }

        /// <summary>
        /// Reads an incoming stream containing a message and
        /// wraps it with a streamed message. The assumption is that the incoming
        /// data stream is <i>not</i> a SOAP envelope, but rather an unencapsulated
        /// data item, be it some raw binary, an XML document or HTML form
        /// postback data. This method is called if the inbound transfer mode of the
        /// transport is "streamed".
        /// </summary>
        /// <param name="stream">Input stream</param>
        /// <param name="maxSizeOfHeaders">Maximum size of headers in bytes</param>
        /// <returns>Stream message</returns>
      public override Message ReadMessage(System.IO.Stream stream, int maxSizeOfHeaders)
      {
         return new PoxStreamedMessage(stream, maxSizeOfHeaders);
      }

        /// <summary>
        /// Transforms an incoming message into a stream that a transport can
        /// literally put on the wire as it is filled. This method is called if the outbound
        /// transfer mode of the transport is "streamed".
        /// </summary>
        /// <param name="msg">Input message</param>
        /// <param name="stream">Stream to write to</param>
        /// <remarks>
        /// <para>This method is the "secret sauce" of the PoxEncoder. Instead of encoding the
        /// message in its entirety, this encoder will unwrap the message body and toss out
        /// the envelope and all headers. The resulting "raw" message body (everything inside
        /// and not including soap:Body) will be written out to the transport.</para>
        /// <para>The encoder has an optional, "out of band" argument that is flowing into it
        /// as part of the message's Properties. By adding a <see cref="PoxEncoderMessageProperty"/>
        /// to the <see cref="Message.Properties"/> and setting its <see cref="PoxEncoderMessageProperty.RawBinary"/>
        /// property to 'true', you can switch the encoder into its 'raw binary' mode.</para>
        /// <para> In 'raw binary' mode, the encoder expects that the only child of the message
        /// body element is an element with a local name of "base64Binary" containing base64 encoded
        /// binary data. If that is the case, the encoder will read the content of that element
        /// and write it (not the XML wrapper) onto the stream in binary form and in at most
        /// 1MByte large chunks. If the content does not comply with this requirement, nothing is written.
        /// </para>
        /// </remarks>
        public override void WriteMessage(Message msg, System.IO.Stream stream)
        {
            try
            {
                if (!msg.IsEmpty)
                {
                    // check RawBinary bit in the message property
                    bool rawBinary = false;
                    if (msg.Properties.ContainsKey(PoxEncoderMessageProperty.Name))
                    {
                        rawBinary = ((PoxEncoderMessageProperty)msg.Properties[PoxEncoderMessageProperty.Name]).RawBinary;
                    }

                    if (!rawBinary)
                    {
                        // If we're rendering XML, we layer an XMLDictionaryWriter over the
                        // output stream and have the message render its body content into
                        // that writer and therefore onto the stream.
                        XmlWriterSettings settings = new XmlWriterSettings();
                        settings.OmitXmlDeclaration = true;
                        settings.Indent = true;
                        settings.Encoding = this.textEncoding;
                        XmlWriter innerWriter = XmlWriter.Create(stream, settings);
                        XmlDictionaryWriter writer = XmlDictionaryWriter.CreateDictionaryWriter(innerWriter, false);
                        msg.WriteBodyContents(writer);
                        writer.Flush();
                    }
                    else
                    {
                        // If we're rendering raw binary data, we grab chunks of at most 1MByte
                        // from the 'base64Binary' content element (if that exists) and write them
                        // out as binary data to the output stream. Chunking is done, because we
                        // have to assume that the body content is arbitrarily large. To optimize the
                        // behavior for large streams, we read and write concurrently and swap buffers.
                        XmlDictionaryReader dictReader = msg.GetReaderAtBodyContents();
                        if (dictReader.NodeType == XmlNodeType.Element && dictReader.LocalName == "base64Binary")
                        {
                            if (dictReader.Read() && dictReader.NodeType == XmlNodeType.Text)
                            {
                                byte[] buffer1 = new byte[1024*1024], buffer2 = new byte[1024*1024];
                                byte[] readBuffer = buffer1, writeBuffer = buffer2;
                               
                                int bytesRead = 0;
                                // read the first chunk into the read buffer
                                bytesRead = dictReader.ReadContentAsBase64(readBuffer, 0, readBuffer.Length);
                                do
                                {
                                    // the abort condition for the loop is that we can't read
                                    // any more bytes from the input because the base64Binary element is
                                    // exhausted.
                                    if (bytesRead > 0 )
                                    {
                                        // make the last read buffer the write buffer
                                        writeBuffer = readBuffer;
                                        // write the write buffer to the output stream asynchronously
                                        IAsyncResult result = stream.BeginWrite(writeBuffer, 0, bytesRead,null,null);
                                        // swap the read buffer
                                        readBuffer = (readBuffer == buffer1) ? buffer2 : buffer1;
                                        // read a new chunk into the 'other' buffer synchronously
                                        bytesRead = dictReader.ReadContentAsBase64(readBuffer, 0, readBuffer.Length);
                                        // wait for the write operation to complete
                                        result.AsyncWaitHandle.WaitOne();
                                        stream.EndWrite(result);
                                    }
                                }
                                while (bytesRead > 0);
                            }
                        }
                    }
                }
            }
            catch
            {
                // the client may disconnect at any time, so that's an expected exception and absorbed.
            }
        }
   }
}


The encoder shown above fulfills my two requirements and it is aware of the PoxBase64XmlReader trickery. It renders unencapsulated data onto the wire and accepts and wraps unencapsulated data from the wire. Furthermore, it supports buffered messages and it supports Indigo’s streaming mode, which allows sending messages of arbitrary size. What’s still missing in the picture is how we hook the encoder into the binding and how we can control whether the encoder works in “POX mode” rendering XML or in “Raw Binary” mode rendering arbitrary data content. I might also have to explain what a PoxStreamedMessage is. I might also have to explain a bit better what the encoder does to begin with ;-)
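To illustrate the “Raw Binary” mode switch, here is a hedged sketch of how a service operation might hand raw binary content to the encoder. `Base64BodyWriter` is my own hypothetical helper (not from this series), and I’m assuming `PoxEncoderMessageProperty` has a default constructor and a settable `RawBinary` property; the `base64Binary` body convention is the one the encoder’s WriteMessage expects:

```csharp
using System.ServiceModel.Channels;
using System.Xml;

// Hypothetical helper: a BodyWriter that emits the single 'base64Binary'
// element the PoxEncoder's raw mode looks for in the message body.
class Base64BodyWriter : BodyWriter
{
    readonly byte[] data;
    public Base64BodyWriter(byte[] data) : base(true) { this.data = data; }

    protected override void OnWriteBodyContents(XmlDictionaryWriter writer)
    {
        writer.WriteStartElement("base64Binary");
        writer.WriteBase64(data, 0, data.Length);
        writer.WriteEndElement();
    }
}

static class RawReply
{
    // Build a reply message and flip the encoder into 'raw binary' mode
    // via the out-of-band message property (assumed settable).
    public static Message Create(byte[] imageBytes)
    {
        Message reply = Message.CreateMessage(
            MessageVersion.Soap11, "", new Base64BodyWriter(imageBytes));
        PoxEncoderMessageProperty prop = new PoxEncoderMessageProperty();
        prop.RawBinary = true;
        reply.Properties.Add(PoxEncoderMessageProperty.Name, prop);
        return reply;
    }
}
```

The encoder then decodes the base64 content and puts the bytes on the wire as-is, which is how an operation like GetLogo() can serve an image/png body.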

Well, at least you have the code already, Part 6 comes with the prose. 

Categories: Indigo

Part 1, Part 2, Part 3

The SuffixFilter that I have shown in Part 3 of this little series interacts with the Indigo dispatch internals to figure out which endpoint shall receive an incoming request. If the filter reports true from its Match() method, the service endpoint that owns the particular filter is being picked and its channel gets the message. But at that point we still don’t know which of the operations on the endpoint’s contract shall be selected to handle the request.

We’ll take a step back and recap what we have by citing one of the contract declarations from Part 1:

[ServiceContract, HttpMethodOperationSelector]
interface IMyApp
{
    [OperationContract, HttpMethod("GET",UriSuffix="/customers/*")]
    CustomerInfo GetCustomerInfo();
    [OperationContract, HttpMethod("PUT", UriSuffix = "/customers/*")]
    void UpdateCustomerInfo(CustomerInfo info);
    [OperationContract, HttpMethod("DELETE", UriSuffix = "/customers/*")]
    void DeleteCustomerInfo();
}

If we implement this contract on a class and host the service endpoint for it at, say, http://www.example.com/myapp this particular endpoint will only accept requests on http://www.example.com/myapp/customers/* (whereby ‘*’ can really be any string) because our suffix filter that’s being hooked in by the HttpMethodOperationSelectorAttribute and populated with the “/customers/*” suffix won’t let any other request pass. Only those requests for which a pattern match can be found when combining an operation’s suffix pattern with the endpoint URI are positively matched by the suffix filter. For a more complex example I’ll let you peek at a (shortened) snippet of one of the contracts of the TV server I am working on:

/// <summary>
/// Contract for the channel service
/// </summary>
[ServiceContract(Namespace = Runtime.ChannelServiceNamespaceURI), HttpMethodOperationSelector]
public interface IChannelService
{
    /// <summary>
    /// Gets the default RSS for this channel.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'text/xml' RSS content</returns>
    [OperationContract, HttpMethod("GET")]
    Message GetRss(Message message);
    /// <summary>
    /// Gets the channel logo as a raw binary image with appropriate
    /// media type, typically image/gif, image/jpeg or image/png
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'image/*' binary content</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/logo")]
    Message GetLogo(Message message);
    /// <summary>
    /// Gets the RSS for "now", which is typically including
    /// the next 12 hours of guide data from the current time
    /// onward and including currently running shows.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'text/xml' <a href="http://blogs.law.harvard.edu/tech/rss">
    /// RSS 2.0</a> content</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/now")]
    Message GetRssForNow(Message message);
   
    ...

    /// <summary>
    /// Gets an ASX media metadata document containing a reference to
    /// the live TV stream for this channel and a reference to the
    /// HTMLView that provides the UI inside Windows Media Player.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'video/x-ms-asf' <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wmplay10/mmp_sdk/asxelement.asp">
    /// ASX 3.0</a> content.</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/media")]
    Message GetMedia(Message message);
    /// <summary>
    /// Gets information about the current media session hosted by the provider.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'text/xml' content</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/media/session")]
    Message GetMediaSession(Message message);
    /// <summary>
    /// Gets the "media display envelope". This is an HTML stream that is loaded
    /// by Windows Media Player to render an AJAX UI for accessing this service.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with 'text/html' content</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/media/envelope")]
    Message GetMediaDisplayEnvelope(Message message);
    /// <summary>
    /// Gets a media display envelope collateral data element. This method
    /// acts as a web-server and serves up binary files or text files referenced
    /// by the media display envelope. Requests to this endpoint are HTTP GET
    /// requests to the service base URL with the suffix '/media/envelope' with an
    /// appended '/' and the file name of the file that is being requested from the
    /// service runtime's 'envelope' directory.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message containing a raw binary file with appropriate media type</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/media/envelope/*")]
    Message GetMediaDisplayEnvelopeCollateral(Message message);
    /// <summary>
    /// Gets the detail information for a particular episode
    /// in the EPG guide data (linked from RSS) or for a given
    /// recording.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message containing 'text/xml' with detail information.</returns>
    [OperationContract, HttpMethod("GET", UriSuffix = "/item/?")]
    Message GetItemDetail(Message message);
    /// <summary>
    /// Adds detail information for a particular episode. Concretely this
    /// allows adding a recoding job to the episode data that will cause this
    /// show to be recorded.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with HTTP 200 OK status code</returns>
    [OperationContract, HttpMethod("POST", UriSuffix = "/item/?")]
    Message PostItemDetail(Message message);
    /// <summary>
    /// Deletes some of the item detail information for a particular episode.
    /// This is used to cancel a recording for the episode.
    /// </summary>
    /// <param name="message">Input message.</param>
    /// <returns>Reply message with HTTP 200 OK status code.</returns>
    [OperationContract, HttpMethod("DELETE", UriSuffix = "/item/?")]
    Message DeleteItemDetail(Message message);
    /// <summary>
    /// Method receiving all unknown messages sent to this endpoint
    /// </summary>
    /// <param name="message">The message</param>
    /// <returns></returns>
    [OperationContract(Action = "*")]
    Message HandleUnknownMessage(Message message);
}

If you look at the individual operations in the above contract, you’ll see that the suffix filter would – given a base address of http://www.example.com/TV – match requests made on the URIs  http://www.example.com/TV/logo,  http://www.example.com/TV/now, and http://www.example.com/TV/media to name just a few. A special case is the GetRss() operation, which does not have an explicit suffix defined and therefore causes the suffix filter to match on the base address. An important aspect of the suffix filter is that it does not consider the HTTP method (GET, POST). Matching the HTTP method to an operation is the job of the HttpMethodOperationSelectorBehavior, which acts higher up on the endpoint level and picks out the exact method that the call is being dispatched to. The filter is only deciding whether the message is “ours” with respect to the namespace it is targeting.
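As a rough illustration of the suffix matching (my own sketch, not the actual SuffixFilter code from Part 3), a wildcard suffix such as “/media/envelope/*” can be translated into a regular expression and matched against the request URI’s absolute path. The translation rules here (‘*’ matches any text within a segment, ‘?’ matches a single argument segment) are assumptions for this sketch:

```csharp
using System;
using System.Text.RegularExpressions;

class SuffixMatchDemo
{
    // Translate a UriSuffix pattern into a regex (assumed rules:
    // '*' matches any segment text, '?' matches one argument segment).
    static Regex ToRegex(string basePath, string suffix)
    {
        string pattern = Regex.Escape(basePath + suffix)
            .Replace(@"\*", "[^/]*")
            .Replace(@"\?", "[^/]+");
        return new Regex("^" + pattern + "$");
    }

    static void Main()
    {
        Regex r = ToRegex("/TV", "/media/envelope/*");
        Console.WriteLine(r.IsMatch("/TV/media/envelope/logo.png")); // True
        Console.WriteLine(r.IsMatch("/TV/media/session"));           // False
    }
}
```

Note how the pattern is anchored at both ends: “/TV/media/session” fails not because of the verb, but because the path simply doesn’t fall under the “/media/envelope/” suffix namespace.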

The HttpMethodOperationSelectorBehavior is hooked into the service endpoint by the HttpMethodOperationSelectorAttribute’s implementation of IContractBehavior that you can look up in Part 3. In BindDispatch(), the dispatcher’s OperationSelector property is set to a new instance of our specialized operation selector. An “operation selector” is a class that takes an incoming request on an endpoint and figures out the proper operation to dispatch to. The default operation selector in Indigo acts according to the SOAP dispatch rules that I explained in Part 1 (see “Figuring out a programming model”).

However, in our REST/POX world that we’re building here we do not have a concept of “SOAP action”, but rather URIs and HTTP methods and therefore the default dispatch mechanism doesn’t take us very far. Hence, we need to replace the operation selection algorithm with our own and we do that by implementing IDispatchOperationSelector:

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;
using System.ServiceModel;
using System.ServiceModel.Configuration;
using System.ServiceModel.Channels;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    /// Operation selector that dispatches incoming requests based on the
    /// HTTP method and the request URI suffix.
    /// </summary>
   public class HttpMethodOperationSelectorBehavior : IDispatchOperationSelector
   {
      ContractDescription description;
      IDispatchOperationSelector defaultSelector;

        /// <summary>
        /// Initializes a new instance of the <see cref="T:HttpMethodOperationSelectorBehavior"/> class.
        /// </summary>
        /// <param name="description">The description.</param>
        /// <param name="defaultSelector">The default selector.</param>
      public HttpMethodOperationSelectorBehavior(ContractDescription description, IDispatchOperationSelector defaultSelector)
      {
         this.description = description;
         this.defaultSelector = defaultSelector;
      }

        /// <summary>
        /// Selects the operation.
        /// </summary>
        /// <param name="message">The message.</param>
        /// <returns></returns>
      public string SelectOperation(ref Message message)
      {
         if (message.Properties.ContainsKey(HttpRequestMessageProperty.Name))
         {
             HttpRequestMessageProperty msgProp =
                 message.Properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
             string baseUriPath = message.Headers.To.AbsolutePath;
             List<OperationDescription> operationsWithSuffix = new List<OperationDescription>();

             /* Check methods with UriSuffix first. For that we first add
              * operation descriptions that have the correct http method into
              * a list and then sort that list by the processing order */
             foreach (OperationDescription opDesc in description.Operations)
             {
                 HttpMethodAttribute methodAttribute = opDesc.Behaviors.Find<HttpMethodAttribute>();
                 if (methodAttribute != null &&
                      String.Compare(methodAttribute.Method, msgProp.Method, true) == 0 &&
                      methodAttribute.UriSuffix != null)
                 {
                     operationsWithSuffix.Add(opDesc);
                 }
             }

             /*
              * We are sorting the list based on two criteria:
              * a) ProcessingPriority value, and if that's equal:
              * b) Length of the UriSuffix expression
              */
             operationsWithSuffix.Sort(
                 delegate(OperationDescription descA, OperationDescription descB)
                 {
                     HttpMethodAttribute descAAttr = descA.Behaviors.Find<HttpMethodAttribute>();
                     HttpMethodAttribute descBAttr = descB.Behaviors.Find<HttpMethodAttribute>();
                     int result = descAAttr.Priority.CompareTo(descBAttr.Priority);
                     if (result == 0)
                     {
                         result = Math.Sign(descAAttr.UriSuffix.Length - descBAttr.UriSuffix.Length);
                     }
                     return result;
                 }
             );

             for (int i = operationsWithSuffix.Count-1; i >= 0; i--)
             {
                 OperationDescription opDesc = operationsWithSuffix[i];
                 HttpMethodAttribute methodAttribute = opDesc.Behaviors.Find<HttpMethodAttribute>();
                 // we have a method attribute, the attribute's method value matches
                 // the incoming http request and we do have a regex.
                 Match match = methodAttribute.UriSuffixRegex.Match(baseUriPath);
                 if (match != null && match.Success)
                 {
                     return opDesc.Name;
                 }
             }
            

             /* now check the rest */
             foreach (OperationDescription opDesc in description.Operations)
             {
                 HttpMethodAttribute methodAttribute = opDesc.Behaviors.Find<HttpMethodAttribute>();
                 if (methodAttribute != null && methodAttribute.UriSuffixRegex == null)
                 {
                     // we have a http method attribute and the method matches the
                     // request method: match
                     if (String.Compare(methodAttribute.Method, msgProp.Method, true) == 0)
                     {
                         return opDesc.Name;
                     }
                 }
                 else if (String.Compare(opDesc.Name, msgProp.Method, true) == 0)
                 {
                     // we do not have a http method attribute, but the method name
                     // equals the http method.
                     return opDesc.Name;
                 }
             }

             // No match so far. Now lets find a wildcard method.
             foreach (OperationDescription opDesc in description.Operations)
             {
                 if (opDesc.Messages.Count > 0 &&
                     opDesc.Messages[0].Action == "*" &&
                     opDesc.Messages[0].Direction == TransferDirection.Incoming)
                 {
                     return opDesc.Name;
                 }
             }
         }

            // No match so far, delegate to the default selector if one is present
         if (defaultSelector != null)
         {
            return defaultSelector.SelectOperation(ref message);
         }
         return "";
      }
   }
}

As you can see, there is only one method: SelectOperation. The method will only do work on its own if the incoming request is an HTTP request received by Indigo’s HTTP transport. We can figure this out by checking the message properties for an entry named HttpRequestMessageProperty.Name. The presence of this property is required, because that’s the vehicle through which Indigo gives us access to the HTTP method that was used for the request. What we’re looking for sits as an instance string property on HttpRequestMessageProperty.Method.

The algorithm itself is fairly straightforward:

1.      We grab all operations whose HttpMethodAttribute.Method property matches (case-insensitively) the incoming HTTP method string and which have a suffix expression and throw them into a list.

2.      We sort the list by the priority of the attributes amongst each other. I introduced the priorities, because I am allowing wildcards here and I want to allow the suffixes /item/detail and /item/* (read: “anything except detail”) to coexist on the same endpoint, but I need something other than method order to specify that the match on the concrete expression should be done before the wildcard expression. In absence of priorities and/or in the case of collisions, longer suffixes always trump shorter expressions for matching priority.

3.      We match the sorted list in reverse order (higher priority is better) and return the first operation in the list whose suffix expression matches the incoming message’s “To” header (which is the same as the HTTP request URI).

4.      If we don’t have a match, we proceed to iterate over all operations that do not have a suffix and see whether we can find a match solely based on the HttpMethodAttribute.Method value or, if the HttpMethodAttribute is absent, on the plain method name. (So if the method is just named “Get” and there is no attribute, an HTTP GET request will still match).

5.      If we still don’t have a match, we look for the common “all messages without a proper home” method with an OperationContract.Action value of “*”.

6.      And as the very last resort we fall back to the default selector if we have been given one and else we fail out by returning an empty string, which means that there is no match at all.
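To make steps 4 and 5 concrete, here is a hypothetical contract of mine (not from this series) that relies purely on method names for dispatch, plus the wildcard catch-all:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical: no [HttpMethod] attributes at all; the operation selector
// falls back to matching the method name against the HTTP verb (step 4).
[ServiceContract]
interface IPlainResource
{
    [OperationContract]
    Message Get(Message request);    // handles HTTP GET

    [OperationContract]
    Message Delete(Message request); // handles HTTP DELETE

    // Step 5 fallback: catches everything the name-based match misses.
    [OperationContract(Action = "*")]
    Message HandleUnknownMessage(Message request);
}
```

A POST to such an endpoint would end up in HandleUnknownMessage, since neither a suffix nor a method name matches.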

If we find a match, we return a string that’s the same as the name of the method we want to dispatch to and Indigo will then promptly do the right thing and call the respective method, either by passing the raw message outright (as in my TV app) or by breaking up the message body using the XmlFormatter or the XmlSerializer and passing a typed message or a set of parameters.

Step 4 is noteworthy insofar as that the [HttpMethod] attributes aren’t strictly necessary. If you name your methods exactly like the HTTP methods they should handle, the operation selector will figure this out. If that’s what you want, you don’t even need the [HttpMethodOperationSelector] attribute, if you choose to add that information in the configuration file instead. To enable that, I’ve built the required configuration class that you can register in the <behaviorExtensions> and map to the <behaviors> section of an endpoint’s configuration. The class is very, very simple:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel.Configuration;

namespace newtelligence.ServiceModelExtensions
{
   public class HttpMethodOperationSelectorSection : BehaviorExtensionSection
   {
      public HttpMethodOperationSelectorSection()
      {
      }

      protected override object CreateBehavior()
      {
         return new HttpMethodOperationSelectorAttribute();
      }

      public override string ConfiguredSectionName
      {
         get
         {
            return "httpMethodOperationSelector";
         }
      }
   }
}
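Wired into config, the pieces might fit together roughly like the snippet below. Mind that this is illustrative only: the element and attribute names are my assumption based on the class above and the general extensions pattern, and the exact schema in the CTP builds may well differ.

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="httpMethodOperationSelector"
           type="newtelligence.ServiceModelExtensions.HttpMethodOperationSelectorSection, newtelligence.ServiceModelExtensions" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <behavior name="poxBehavior">
      <httpMethodOperationSelector />
    </behavior>
  </behaviors>
</system.serviceModel>
```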

Alright, so where are we? We’ve got dispatch metadata, we’ve got an endpoint dispatch mechanism and we’ve got an operation dispatch mechanism. Furthermore we have a tool that conveniently grabs “parameter segments” from the URI and maps them to an out-of-band collection on the UriArgumentsMessageProperty from where we can conveniently fetch them inside the service implementation.

What we don’t have is POX. We’re still dealing with SOAP messages here. So the next step is to modify the wire encoding in a way that we unwrap the content and throw away the envelope on the way out and that we wrap incoming “raw” data into an envelope to make Indigo happy with incoming requests.

That’s plenty of material for Part 5 and beyond. Stay tuned.

Go to Part 5

Categories: Indigo

Part 1, Part 2

If you’ve read the first two parts of this series, you should know by now (if I’ve done a reasonable job explaining) about the fundamental concepts of how incoming web service messages (requests) are typically dispatched to their handler code and also understand how my Indigo REST/POX extensions are helping to associate the metadata required for dispatching plain, envelope-less HTTP requests with Indigo service operations using the HttpMethod attribute and how the HttpMethodParameterInspector breaks up the URI components into easily consumable, out-of-band parameters that flow into the service code via the UriArgumentsMessageProperty.

What I have not explained is how the dispatching is actually done. There are two parts to that story: Dispatching to services on the listener level (which I will cover here) and dispatching to operations at the endpoint level (which I’ll cover in part 4).

When an HTTP request is received on a namespace that Indigo has registered with HTTP.SYS, the request is matched against a collection of “address filters”. “Registering a namespace” means that if you configure a service-endpoint to listen at the endpoint http://www.example.com/foo, the service-endpoint “owns” that URI.

What’s noteworthy is that if you have an Indigo/WCF application listening to endpoints at http://www.example.com/baz, http://www.example.com/foo and http://www.example.com/foo/bar, the demultiplexing (“demuxing” in short) of the requests is done by Indigo and not by the network stack. HTTP.SYS will push requests from any registered URI namespace of the particular application into the “shared” Indigo HTTP transport and leave it up to Indigo to figure out the right endpoint to dispatch to. And that turns out to be perfect for our purposes.

Whenever an incoming message needs to be dispatched to an endpoint, the message is matched against an address filter table. [For the very nosy: The place where it all happens is in the internal EndpointListenerTable class’s Lookup method, which you could probably look at if you had the right tools, but I didn’t say that.]

By default, the address filter that is used for any “regular” service is the EndpointAddressFilter, which reports a match if the incoming message’s “To” addressing header (which is constructed from the HTTP header information if it’s not immediately contained in the incoming message) is a match for the registered URI. Whether a match is found is dependent on the URI’s port and host-name (controllable by the HostNameComparisonMode in the HTTP binding configuration) and the URI’s remaining path, which must be an exact match for the registered service endpoint URI. Since we want to introduce a slightly different dispatch scheme that is based on matching not only on the exact endpoint URI’s path but also on suffixes appended to that URI, we must put a hook into the dispatch mechanism and extend the default behavior. If a method is marked up with [HttpMethod(“GET”,UriSuffix=”/bar”)] and the endpoint is hosted at http://www.example.com/foo, we want any HTTP GET request to http://www.example.com/foo/bar to be dispatched to that endpoint and, subsequently, to that exact method.

To infuse that behavior into Indigo, we need to tell it so. If you take a look at Part 2 and at the service contract declarations that I posted there, you will notice the HttpMethodOperationSelector attribute alongside the ServiceContract attribute. That attribute class does the trick:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;

namespace newtelligence.ServiceModelExtensions
{
   public class HttpMethodOperationSelectorAttribute :
                  Attribute, IContractBehavior, IEndpointBehavior
   {
      public void BindDispatch(
                ContractDescription description,
                IEnumerable<ServiceEndpoint> endpoints,
                DispatchBehavior dispatch,
                BindingParameterCollection parameters)
      {
         dispatch.OperationSelector =
              new HttpMethodOperationSelectorBehavior(description, dispatch.OperationSelector);
         foreach (ServiceEndpoint se in endpoints)
         {
            if (se.Behaviors.Find<HttpMethodOperationSelectorAttribute>() == null)
            {
               se.Behaviors.Add(this);
            }
         }
           
      }

      public void BindProxy(
                     ContractDescription description,
                     ServiceEndpoint endpoint,
                     ProxyBehavior proxy,
                     BindingParameterCollection parameters)
      {

      }

      public bool IsApplicableToContract(Type contractType)
      {
         return true;
      }

      public void BindServiceEndpoint(
                     ServiceEndpoint serviceEndpoint,
                     EndpointListener endpointListener,
                     BindingParameterCollection parameters)
      {
            SuffixFilter suffixFilter = null;

            if (endpointListener.AddressFilter == null ||
                !(endpointListener.AddressFilter is SuffixFilter))
            {
                suffixFilter = new SuffixFilter(endpointListener, endpointListener.AddressFilter);
                endpointListener.AddressFilter = suffixFilter;
                ((Dispatcher)endpointListener.Dispatcher).Filter = suffixFilter;
            }
            else
            {
                suffixFilter = endpointListener.AddressFilter as SuffixFilter;
            }

         foreach (OperationDescription opDesc in serviceEndpoint.Contract.Operations)
         {
            HttpMethodAttribute methodAttribute = opDesc.Behaviors.Find<HttpMethodAttribute>();
            if (methodAttribute != null)
            {
               if (methodAttribute.UriSuffixRegex != null)
               {
                  suffixFilter.AddSuffix(methodAttribute.UriSuffixRegex);
               }
            }
         }
      }
   }
}


In the attribute’s implementation of IEndpointBehavior.BindServiceEndpoint, which is invoked by Indigo as the endpoint is initialized (in response to ServiceHost.Open() ), we replace the service’s default endpoint filter with our own SuffixFilter class. Once we’ve done that, we iterate over the HttpMethodAttribute metadata elements that sit on the individual operations/methods in the contract description (this is the actual reason we put them there, see Part 2) and add any suffix we find to the filter’s suffix table. We’ll get back to how the “operation selector” is hooked in in the next part; let’s investigate the suffix filter first.

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;
using System.ServiceModel;
using System.ServiceModel.Configuration;
using System.ServiceModel.Channels;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    /// This class implements a specialized ServiceModel address filter
    /// that allows matching URL suffixes.
    /// </summary>
    /// <remarks>
    /// The class aggregates an EndpointAddressFilter to help with the matching logic.
    /// </remarks>
    public class SuffixFilter : Filter
    {
        /// <summary>
        /// List for the suffixes.
        /// </summary>
        List<Regex> suffixes;
        /// <summary>
        /// Original filter that we delegate to if we can't match with this
        /// one.
        /// </summary>
        Filter originalFilter;
        /// <summary>
        /// The endpoint listener that this filter is applied to
        /// </summary>
        EndpointListener endpointListener;
        /// <summary>
        /// The aggregated endpoint address filter
        /// </summary>
        EndpointAddressFilter addressFilter;

        /// <summary>
        /// Creates a new instance of SuffixFilter
        /// </summary>
        /// <param name="endpointListener">EndpointListener this filter is attached to</param>
        /// <param name="originalFilter">Original AddressFilter of the EndpointListener</param>
        public SuffixFilter(EndpointListener endpointListener, Filter originalFilter)
        {
            this.suffixes = new List<Regex>();
            this.originalFilter = originalFilter;
            this.endpointListener = endpointListener;
        }

        /// <summary>
        /// Implements the matching logic
        /// </summary>
        /// <param name="message">Message that shall be matched</param>
        /// <returns>Returns an indicator for whether the message is considered a match</returns>
        public override bool Match(Message message)
        {
            // Workaround for Nov2006 CTP bug. GetEndpointAddress() cannot be
            // called on an EndpointListener before the listener is running.
            if (addressFilter == null)
            {
                addressFilter = new EndpointAddressFilter(endpointListener.GetEndpointAddress(), false);
            }

            // check whether we have an immediate match, which means that the message's
            // To header is an exact match for the EndpointListener's address
            if (addressFilter.Match(message))
            {
                return true;
            }
            else
            {
                // no direct match. Save the original header value and chop off the
                // query portion of the URI.
                Uri originalTo = message.Headers.To;
                string baseUriPath = originalTo.AbsolutePath;
                string baseUriRoot = originalTo.GetLeftPart(UriPartial.Authority);

                // match against the suffix list
                foreach (Regex suffixExpression in suffixes)
                {
                    Match match = suffixExpression.Match(baseUriPath);
                    if (match != null && match.Success)
                    {
                        string filterUri = baseUriRoot + baseUriPath.Remove(baseUriPath.LastIndexOf(match.Value));
                        message.Headers.To = new Uri(filterUri);
                        if (addressFilter.Match(message))
                        {
                            message.Headers.To = originalTo;
                            return true;
                        }
                        message.Headers.To = originalTo;
                    }
                }
            }
            if (originalFilter != null)
            {
                // if no match has been found up to here, we match against the
                // original filter if that was provided.
                return originalFilter.Match(message);
            }
            return false;
        }

        /// <summary>
        /// Implements the matching logic by constructing a Message over
        /// a MessageBuffer and delegating to the Match(Message) overload
        /// </summary>
        /// <param name="buffer"></param>
        /// <returns></returns>
        public override bool Match(MessageBuffer buffer)
        {
            Message msg = buffer.CreateMessage();
            bool result = Match(msg);
            msg.Close();
            return result;
        }

        /// <summary>
        /// Adds a new suffix to the suffix table
        /// </summary>
        /// <param name="suffix">Suffix value</param>
        public void AddSuffix(Regex suffix)
        {
            suffixes.Add(suffix);
        }

        /// <summary>
        /// Removes a suffix from the suffix table
        /// </summary>
        /// <param name="suffix"></param>
        public void RemoveSuffix(Regex suffix)
        {
            suffixes.Remove(suffix);
        }
    }
}


The heart of the filter is the Match method. To finally figure out whether a message is a match, we employ the matching logic of the default EndpointAddressFilter, which deals with matching the host names and the “base URI” at which the service was registered. What the suffix filter does in addition is to match the suffix regex pattern against the incoming message’s “To” header; if that is a match, the suffix is stripped and the remaining URI is matched against the aggregated EndpointAddressFilter. Only if we get a match for both the suffix and the remainder URI do we report a positive match back to the infrastructure by returning true. And in that case, and only in that case, the service endpoint for which “this” suffix filter was installed and populated gets the request.
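Boiled down to plain Uris and strings, the suffix-then-remainder matching described above looks roughly like this. SuffixMatchDemo is a hypothetical stand-in for the filter plus its aggregated EndpointAddressFilter, not the actual class:

```csharp
using System;
using System.Text.RegularExpressions;

// Self-contained sketch of the SuffixFilter matching logic, with plain
// Uri values in place of Indigo messages and address filters.
public static class SuffixMatchDemo
{
    public static bool Matches(Uri endpointAddress, Regex suffix, Uri to)
    {
        // exact match: what the aggregated EndpointAddressFilter would report
        if (to == endpointAddress)
            return true;

        // suffix match: strip the matched suffix off the path and compare
        // the remaining URI against the endpoint address
        string path = to.AbsolutePath;
        Match match = suffix.Match(path);
        if (!match.Success)
            return false;

        string remainder = to.GetLeftPart(UriPartial.Authority)
                         + path.Remove(path.LastIndexOf(match.Value));
        return new Uri(remainder) == endpointAddress;
    }
}
```

With an endpoint at http://www.example.com/foo and a suffix pattern for “/bar”, a request to http://www.example.com/foo/bar matches, while http://www.example.com/baz/bar does not.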

For each incoming request, Indigo goes through all registered endpoint address filters and asks them whether they want to service it. And that really means “all”. Indigo will refuse to service the request if two or more filters report ownership of the respective request and will throw a traceable (using Indigo tracing) internal exception that will cause none of the services to be picked due to this ambiguity. In the case of overlapping dispatch namespaces, none is indeed better than “any random”.

Next part: HttpMethodOperationSelectorBehavior

Go to Part 4

Categories: Indigo

In the first part of this series, I gave you a little introduction to REST/POX in contrast to SOAP and also explained some of the differences in how incoming requests are dispatched. Now I’ll start digging into how we can teach Indigo a RESTish dispatch mechanism that dispatches based on the HTTP method and by matching on the URI against a suffix pattern.

The idea here is that we have a service implementation that takes care of a certain resource namespace. To stick with the example from Part 1, we assume that the resources managed within this (URI) namespace are customers and data related to customers. Mind that this might not be all data of a respective customer, but that some data may very well be residing in completely different namespaces (and on different servers).

As a reminder: When I write “namespaces” I specifically mean that we’re creating hierarchical scopes for data. All customer representations managed by our imaginary service are subordinates of the namespace http://www.example.com/customers, the representation of the customer identified by customer-id 00212332 occupies the namespace http://www.example.com/customers/00212332, all communication (phone) numbers of that customer are subordinates of http://www.example.com/customers/00212332/comm and the home phone number might be identified by http://www.example.com/customers/00212332/comm/home-phone. However, all orders made by that respective customer might be found somewhere completely different; maybe here:  http://www.tempuri.org/ordersystem/customer-orders/00212332. The data representation of the customer would contain that link, but the customer service would not manage those resources (the orders), at all.

Purists might (and do) argue that plain HTTP handlers (or “servlets”, in Java terms) representing exactly one resource type are the best way to implement the processing logic for this URI/HTTP centric model, but since I am much more a pragmatist than a purist, I prefer using a infrastructure that maps incoming requests to a programming model that’s easy enough for most programmers to deal with. It turns out that a class with methods that deal with related stuff (a customer and his addresses and phone numbers) is something that most programmers can handle pretty well by now and there’s nothing wrong with co-locating related handlers for data from a given data source on one flat interface, even if the outside representation of that data suggests that the data and its “endpoints” are ordered hierarchically. In the end, the namespace organization is just a smokescreen that we put in front of our implementation. Just to make Mark happy, I’ll show a very HTTP and service-per-object aligned contract model and, later in the next part, also a more readable model for the rest of us to explain how the dispatch works. I’ll start with the model for idealists:

[ServiceContract, HttpMethodOperationSelector]
interface ICustomerResource
{
    [OperationContract,
     HttpMethod("GET", UriSuffix = "/customers/?")]
    Message Get(Message msg);
    [OperationContract,
     HttpMethod("PUT", UriSuffix = "/customers/?")]
    Message Put(Message msg);
    [OperationContract,
     HttpMethod("POST", UriSuffix = "/customers/?")]
    Message Post(Message msg);
    [OperationContract,
     HttpMethod("DELETE", UriSuffix = "/customers/?")]
    Message Delete(Message msg);
}

[ServiceContract, HttpMethodOperationSelector]
interface ICommunicationResource
{
    [OperationContract,
     HttpMethod("GET", UriSuffix = "/customers/?/comm/?")]
    Message Get(Message msg);
    [OperationContract,
     HttpMethod("PUT", UriSuffix = "/customers/?/comm/?")]
    Message Put(Message msg);
    [OperationContract,
     HttpMethod("POST", UriSuffix = "/customers/?/comm/?")]
    Message Post(Message msg);
    [OperationContract,
      HttpMethod("DELETE", UriSuffix = "/customers/?/comm/?")]
    Message Delete(Message msg);
 }

We have two different contracts here, one for the “customers” namespace and one for the “comm” sub-namespace, and the implementation of these two contracts could be sitting on the same implementation class or on two different classes, whereby they could be co-located at the exact same root address or sitting on different machines. All of that doesn’t really matter, since the filtering/dispatch logic we’ll use here will figure out the right thing to do, meaning the right handler method to dispatch to. Also mind that there’s a difference from the examples I showed in Part 1 in that I am now using messages.

The UriSuffix of the HttpMethodAttribute serves three purposes. First, it is used to construct a regular expression that is used to match incoming messages to the right endpoint using a custom endpoint Filter. Second, the same regular expression is used to figure out which method the message shall be dispatched to at that endpoint using a custom implementation of IDispatchOperation. Third, the regular expression is also used to isolate the URI-embedded parameters and make them easily accessible on a message property. So for the ICommunicationResource.Get() operation above, the handler implementation would start out as follows and would make the values occurring at the two “?” of the suffix available in the UriArgumentsMessageProperty.InUrlArgs collection that is shown further below:

Message ICommunicationResource.Get(Message msg)
{
   UriArgumentsMessageProperty uriArgs = UriArgumentsMessageProperty.FromOperationContext();
   string customerid = uriArgs.InUrlArgs[0];
   string commid = uriArgs.InUrlArgs[1];
   ...

The expressions for UriSuffix support two different wildcards. The “*” wildcard will match any character and the “?” wildcard will match any character except the forward slash. If you would, for instance, want to build an operation that behaves like a web server and might serve up data from a directory and its subdirectories, you’d use something as global as “/*” and any URI would match the respective endpoint/method. If you want to match/extract segments of a namespace path as we do here, you use the “?”.
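To see the wildcard translation in isolation, here is the suffix-to-regex construction as a standalone snippet. The SuffixRegexDemo class is just for illustration; in the extensions the same construction lives in the UriSuffix property setter:

```csharp
using System;
using System.Text.RegularExpressions;

// Demonstrates how a UriSuffix expression becomes the match regex:
// '*' turns into "(.*?)", '?' into "([^/]*?)", anchored at the end of the path.
public static class SuffixRegexDemo
{
    public static Regex Build(string uriSuffix)
    {
        return new Regex(
            Regex.Escape(uriSuffix).Replace("\\*", "(.*?)")
                                   .Replace("\\?", "([^/]*?)") + "$",
            RegexOptions.CultureInvariant |
            RegexOptions.IgnoreCase |
            RegexOptions.Singleline);
    }
}
```

Matching Build("/customers/?/comm/?") against the path "/customers/00212332/comm/home-phone" yields two capture groups, "00212332" and "home-phone", which is exactly what later lands in the UriArgumentsMessageProperty.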

And so here is the complete and rather straightforward implementation of the HttpMethodAttribute:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;
using System.Text.RegularExpressions;

namespace newtelligence.ServiceModelExtensions
{
    /// <summary>
    /// The HttpMethodAttribute is used to declare the HTTP method and
    /// an optional suffix for the REST/POX extensions to dispatch on. In absence of
    /// this attribute, the dispatch mechanism will attempt to dispatch on the
    /// name of the operation and try matching it by name to the HTTP method used.
    /// </summary>
    [AttributeUsage(AttributeTargets.Method)]
    public class HttpMethodAttribute : Attribute, IOperationBehavior
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="T:HttpMethodAttribute"/> class.
        /// </summary>
        /// <param name="method">The method.</param>
        public HttpMethodAttribute(string method)
        {
            _Method = method;
        }

        private string _Method;
        /// <summary>
        /// Gets the HTTP method. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
        /// </summary>
        /// <value>The method.</value>
        public string Method
        {
            get
            {
                return _Method;
            }
        }

        private string _UriSuffix;
        /// <summary>
        /// Sets or gets the Uri suffix. The Uri suffix is an expression used
        /// to match an incoming request against the endpoint and this method.
        /// The UriSuffix may contain two different wildcards: '*' matches the
        /// shortest sequence of arbitrary characters up to the next concrete character
        /// in the suffix string. The '?' behaves similarly, but excludes the '/'
        /// character from matching.
        /// </summary>
        /// <value>The URI suffix.</value>
        public string UriSuffix
        {
            set
            {
                _UriSuffix = value;
                _UriSuffixRegex = new Regex(
                       Regex.Escape(_UriSuffix).Replace("\\*", "(.*?)")
                                               .Replace("\\?", "([^/]*?)") + "$",
                       RegexOptions.CultureInvariant |
                       RegexOptions.IgnoreCase |
                       RegexOptions.Singleline |
                       RegexOptions.Compiled);
            }
            get
            {
                return _UriSuffix;
            }
        }

        private Regex _UriSuffixRegex;
        /// <summary>
        /// Gets the regular match expression constructed from the UriSuffix.
        /// </summary>
        /// <value>The URI suffix regex.</value>
        public Regex UriSuffixRegex
        {
            get
            {
                return _UriSuffixRegex;
            }
        }

        private int _Priority;
        /// <summary>
        /// Gets or sets the priority. The priority is used to control the
        /// order in which the suffix expressions are processed when matching.
        /// A higher priority causes the expression to be matched earlier.
        /// </summary>
        /// <value>The priority.</value>
        public int Priority
        {
            get
            {
                return _Priority;
            }
            set
            {
                _Priority = value;
            }
        }

        /// <summary>
        /// Applies the behavior.
        /// </summary>
        /// <param name="description">Description.</param>
        /// <param name="proxy">Proxy.</param>
        /// <param name="parameters">Parameters.</param>
        public void ApplyBehavior(OperationDescription description,
                  ProxyOperation proxy, BindingParameterCollection parameters)
        {
            // do nothing proxy-side
        }

        /// <summary>
        /// Applies the behavior.
        /// </summary>
        /// <param name="description">Description.</param>
        /// <param name="dispatch">Dispatch.</param>
        /// <param name="parameters">Parameters.</param>
        public void ApplyBehavior(OperationDescription description,
                      DispatchOperation dispatch, BindingParameterCollection parameters)
        {
            // We're adding a parameter inspector that parses the Uri parameters into
            // an UriArgumentsMessageProperty
            dispatch.ParameterInspectors.Add(new HttpMethodParameterInspector(this));
        }
    }
}


Of course this attribute is a bit special. You might have noticed that it implements the IOperationBehavior interface with its two method overloads of ApplyBehavior whereby one is for the proxy side of a channel (which we don’t care about in this case) and the other for the dispatcher (service-) side of an implementation. The presence of this interface causes the Indigo runtime to instantiate the attribute and add it to the contract metadata whenever the contract is built using reflection. You could also instantiate the attribute yourself and add it to any existing in-memory Indigo contract’s operation description if you liked. This is a convenient way to get the additional metadata into the contract description because we need it at several places later.

At the dispatch-side we’re also adding an implementation of IParameterInspector, whose job it is to extract the URI-embedded arguments and also parse “key=value” pairs of an optional query string, if one is present.

Parameter inspectors are called immediately before and after a method has been invoked and, as their name implies, are meant to be used to inspect a method’s parameters and/or output. However, their use is not restricted to that. Because the operation context is also available when the inspectors are called, you can also inspect headers, properties or any other context information at this point.

Even though this is largely a convenience feature not central to the dispatcher, I’ll show the class here, because I mentioned the UriArgumentsMessageProperty above. This is where it gets populated and set:

using System;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Text.RegularExpressions;
using System.Collections.Specialized;

namespace newtelligence.ServiceModelExtensions
{
    class HttpMethodParameterInspector : IParameterInspector
    {
        HttpMethodAttribute _metadata;

        /// <summary>
        /// Creates a new instance of HttpMethodParameterInspector
        /// </summary>
        public HttpMethodParameterInspector(HttpMethodAttribute metadata)
        {
            _metadata = metadata;
        }

        #region IParameterInspector Members

        public object BeforeCall(string operationName, object[] inputs)
        {
            OperationContext opCtx = OperationContext.Current;
            NameValueCollection queryArguments;
            StringCollection inUrlArgs = new StringCollection();
            Uri toHeader = opCtx.IncomingMessageHeaders.To;

            // Parse query strings
            if (opCtx.IncomingMessageProperties.ContainsKey(HttpRequestMessageProperty.Name))
            {
                HttpRequestMessageProperty rqMsgProp =
                   opCtx.IncomingMessageProperties[HttpRequestMessageProperty.Name]
                      as HttpRequestMessageProperty;
                queryArguments = UriHelper.ParseQueryString(rqMsgProp.QueryString);
            }
            else
            {
                queryArguments = UriHelper.ParseQueryString(toHeader.Query);
            }

            // Get the in-URL arguments from the regex captures
            Match match = _metadata.UriSuffixRegex.Match(toHeader.AbsolutePath);
            if (match.Success && match.Groups.Count > 0)
            {
                // if we have more than 1 capture group (the first is always the
                // full match expression), we store the subsequent groups in the
                // inUrlArgs collection in order of occurrence.
                for (int i = 1; i < match.Groups.Count; i++)
                {
                    inUrlArgs.Add(match.Groups[i].Value);
                }
            }

            opCtx.IncomingMessageProperties.Add(
                UriArgumentsMessageProperty.Name,
                new UriArgumentsMessageProperty(queryArguments, inUrlArgs)
            );

            return null;
        }

        public void AfterCall(string operationName, object[] outputs,
                      object returnValue, object correlationState)
        {
            // do nothing
        }

        #endregion
    }
}

The query string is parsed using a helper class which splits the query string at each ‘&’ first and then splits the segments at ‘=’ and throws the resulting pairs into a NameValueCollection. That is no big deal and I’ll omit the helper here. The UriArgumentsMessageProperty that gets put into the message properties is just a class that holds the query arguments and the string collection with the regex matches. It is quite trivial:

using System;
using System.Collections.Specialized;
using System.Collections.Generic;
using System.Text;
using System.ServiceModel;

namespace newtelligence.ServiceModelExtensions
{

    /// <summary>
    /// Message property class holding arguments from the request URI.
    /// </summary>
    public class UriArgumentsMessageProperty
    {
        NameValueCollection _queryArguments;
        StringCollection _inUrlArgs;

        /// <summary>
        /// Creates a new instance of UriArgumentsMessageProperty
        /// </summary>
        internal UriArgumentsMessageProperty(NameValueCollection queryArguments,
                                             StringCollection inUrlArgs)
        {
            _queryArguments = queryArguments;
            _inUrlArgs = inUrlArgs;
        }

        /// <summary>
        /// Gets the query arguments.
        /// </summary>
        /// <value>The query arguments.</value>
        public NameValueCollection QueryArguments
        {
            get
            {
                return _queryArguments;
            }
        }

        /// <summary>
        /// Gets the arguments in the URL.
        /// </summary>
        /// <value>The in-URL args.</value>
        public StringCollection InUrlArgs
        {
            get
            {
                return _inUrlArgs;
            }
        }

        /// <summary>
        /// Gets the name of the property.
        /// </summary>
        /// <value>The name.</value>
        public static string Name
        {
            get
            {
                return "urlArguments";
            }
        }

        /// <summary>
        /// Retrieves the message property from the current operation context.
        /// </summary>
        /// <returns></returns>
        static public UriArgumentsMessageProperty FromOperationContext()
        {
            return FromOperationContext(OperationContext.Current);
        }

        /// <summary>
        /// Retrieves the message property from the operation context.
        /// </summary>
        /// <param name="operationContext">operation context</param>
        /// <returns></returns>
        static public UriArgumentsMessageProperty
                 FromOperationContext(OperationContext operationContext)
        {
            return operationContext.IncomingMessageProperties[Name]
                      as UriArgumentsMessageProperty;
        }
    }
}
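For the curious, the omitted UriHelper.ParseQueryString could plausibly be implemented along these lines. This is my sketch under the “split at ‘&’, then at ‘=’” description above, not the actual helper:

```csharp
using System;
using System.Collections.Specialized;

// Hypothetical sketch of the omitted query-string helper: split at '&',
// split each segment at '=', unescape, and collect into a NameValueCollection.
public static class UriHelperSketch
{
    public static NameValueCollection ParseQueryString(string query)
    {
        NameValueCollection result = new NameValueCollection();
        if (string.IsNullOrEmpty(query))
            return result;

        foreach (string pair in query.TrimStart('?').Split('&'))
        {
            if (pair.Length == 0)
                continue;
            int eq = pair.IndexOf('=');
            string key = eq < 0 ? pair : pair.Substring(0, eq);
            string value = eq < 0 ? String.Empty : pair.Substring(eq + 1);
            result.Add(Uri.UnescapeDataString(key), Uri.UnescapeDataString(value));
        }
        return result;
    }
}
```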


Ok, fine, and how is that now really used for dispatching? You’ll see that when I explain the SuffixFilter and the HttpMethodOperationSelectorBehavior in the next parts.

(There will also be a downloadable version of the code later, but I am still tweaking some little things and don’t want to keep updating)

Go to Part 3

Categories: Indigo

A not so long time ago in a land far away…

A little bit more than half a year ago I got invited to a meeting at Microsoft in Redmond and discussed with Steve Swartz, Yasser Shohoud and Eugene Osovetsky how to implement POX and REST support for Indigo. You could also say that Steve dragged me into the meeting, since I happened to be on campus anyways and was burning some time in Steve’s office. I am not sure whether I made any good contribution to the cause in the meeting, but at least I witnessed the definition of a special capability for the HTTP transport that I am exploiting with a set of Indigo extensions that I’ll present in this series of blog posts. The consensus in the meeting was that the requirements for building POX/REST support into the product weren’t generally clear enough in the sense that when you ask 100 people in the community you get 154 ever-changing opinions about how to write such apps. As a consequence it would not really be possible to define a complete programming model surface that everyone would be happy with, but a simple set of hooks could be put into the product that people could use to build programming models rather easily.

And so they did, and so I did. This new capability of the HTTP transport first appeared in the September CTP of Indigo/WCF and surfaces to the developer as properties in the Message class Properties collection or the OperationContext.Incoming/OutgoingMessageProperties.

If you are using the Indigo HTTP transport on the server, the transport will always stick a HttpRequestMessageProperty instance into the incoming message properties, which provides access to the HTTP headers, the HTTP method (GET, POST, PUT, etc.) and the full query string. On the client, you can create an instance of this property class yourself and stick it into any outbound message’s properties collection  and, with that, control how the transport performs the request. For sending replies from the server, you can put a HttpResponseMessageProperty into the message properties (or, again, into the OperationContext) and set the HTTP status code and description and of course the HTTP reply headers.  
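For illustration, touching those two property classes looks roughly like this. This is a sketch against the member names as they appear in later WCF builds (Method, StatusCode, StatusDescription); the CTP discussed here may differ in detail:

```csharp
// Sketch: reading the request property and setting the response property.
// Member names are as in later WCF builds and may differ in this CTP.
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class HttpPropertySketch
{
    // Server side: inspect the HTTP method of the incoming request.
    public static string IncomingHttpMethod()
    {
        HttpRequestMessageProperty request = (HttpRequestMessageProperty)
            OperationContext.Current.IncomingMessageProperties[HttpRequestMessageProperty.Name];
        return request.Method; // "GET", "POST", "PUT", ...
    }

    // Server side: control the HTTP status code/description of the reply.
    public static void SetReplyStatus(HttpStatusCode statusCode, string description)
    {
        HttpResponseMessageProperty response = new HttpResponseMessageProperty();
        response.StatusCode = statusCode;
        response.StatusDescription = description;
        OperationContext.Current.OutgoingMessageProperties[HttpResponseMessageProperty.Name] = response;
    }
}
```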

And since I have nothing better to do, I wanted to know whether this rather simple control feature for the HTTP transport would indeed be enough to build a POX/REST programming model and application when combined with the rest of the Indigo extensibility features. Executive Summary: Yes.

Now, the “yes” doesn’t mean that the required extensions write themselves in an hour. It’s a bit of work to get there, so it’ll take me a bit of writing (and a few days) to tell the whole story. There are two ways to approach the explanation: “Bottom-up” where I’d start at the wire level and show how to have Indigo send and accept any (and I mean any: 8GB+ video/mpeg files included) stuff that isn’t wrapped in SOAP envelopes or “Top-Down” where I start at the programming model surface and explain the application-level developer experience first. It’s a tough call, but I’ve decided to start with the latter.

REST and POX

In a nutshell – and I am sure that I’ll now get 153 corrections by comments and email from the other 99 people – the core ideas behind REST (representational state transfer) are that it builds on the pervasive HTTP web-architecture, that every item (“resource”) in the whole wide world can/could be identified by an HTTP URL and that HTTP has sufficient built-in methods to manipulate such resources. You can GET a data representation of the item, you can POST a new, related resource representation underneath an existing resource, you can PUT updates to resource representations and you can of course DELETE a resource representation. Every representation of a resource may have links to related resources that you can likewise access using these methods and therefore you get a web of information. REST is an architectural generalization of the HTML/HTTP web, and the proponents of REST argue that the web quite obviously works very well and that therefore REST is fit for any type of application. Of course, that’s a bit of an exaggeration, because there are plenty of scenarios where request/response isn’t the thing to do, but for a lot of applications REST may just be a good choice. Commonly, the REST architectural style is combined with XML data representations, even though this architectural style is really content-neutral. The place to find a “link” in a resource represented by a JPEG photo could very well be a URL somewhere on an advertising poster in the background behind your mother-in-law wearing a horrible hat at an English horse-racing event.

POX means “plain old XML” and I’ve also heard a definition saying that “POX is REST without the dogma”, but that’s not really correct. POX is not really well defined, but it’s clear that the “plain” is the focus and that means typically that folks who talk about “POX web services” explicitly mean that those services don’t use SOAP. You could see POX as an antonym for SOAP, if you will.

Except for “this isn’t SOAP” the term POX means very little. POX isn’t about architecture, it’s only about content. But REST is about architecture and it is content neutral. So what you commonly find is the combination of REST/POX as an alternative model to SOAP and the WS-* web services stack.

SOAP vs. REST/POX

Indigo is a messaging system that deals with messages that are represented by an envelope with headers and a body: SOAP. Indigo is so soaked with SOAP that you might just be able to wash your hands with a WinFX SDK CD. Still, the fact that Indigo is using an information model that is aligned with SOAP means nothing for what goes on or comes from the wire as I will show you later.

Fundamentally, the reason why anybody might be using SOAP is that it gives you two places to stick things: A header section for metadata and a body section for payload. The headers are important if you need to communicate addressing, security or other information independent from a transport and across multiple communication hops. HTTP does have an extensible headers model, but other messaging transport options don’t. So to make things consistent across all sorts of transports and to provide an abstraction away from the transport specifics, all metadata needed to establish communication is stuck into an envelope alongside the payload. If you look at SOAP and WS-Addressing combined, you can find that the information content is really not much more than that of an IP packet (yes, IP=Internet Protocol). SOAP/WS-Addressing provide an abstraction for routing packets over any sort of transport just as much as IP provided an abstraction over Ethernet, Token Ring, ArcNet and so forth. The problematically-named WS-ReliableMessaging is the TCP equivalent, by the way.

But: If you’ve decided that HTTP is all you need and REST/POX is the way to go for your application, you are apparently happy with a request/response model, you don’t need routing or reliable delivery, transport/app-protocol level security is sufficient, and you don’t need to have an abstraction of the HTTP headers and HTTP methods. In that case, the features that SOAP and the WS-* stack give you could be considered redundant. Let’s assume that’s so and try to eliminate SOAP out of the equation.

Figuring out a programming model

Incoming SOAP messages can be dispatched to handlers, typically methods, in two (and a half) different ways. The first (and a half) and most common option is to dispatch based on the value of the SOAPAction: (SOAP 1.1) HTTP header or the “action” media type parameter (SOAP 1.2). Alternatively, some stacks dispatch on the value of the WS-Addressing wsa:Action header, if present. The second way to dispatch messages is to look at the immediate child element of the soap:Body and to associate that content with a handler. The mapping rules are also spelled out in WS-Addressing. Indigo, like ASP.NET web services, uses the first method of dispatching, because that doesn’t require touching the message content.

Now, if we want to do away with SOAP and only want to kick plain old XML documents or even raw binary data around, we have a bit of a problem. Since we have neither any of the HTTP header hints nor a WS-Addressing wsa:Action header that Indigo could look at, it has no information that it could use to dispatch the incoming request. Even worse: If the incoming request is just an HTTP GET with no entity-body at all, there’s really nothing to look at except, well, the HTTP method and the URI.

But let’s take a step back and look at this Indigo service contract (I spare you the WSDL) that makes any REST/POX person cringe and shout “This is RPC!”:

[ServiceContract]
interface ICustomerInfo
{
    [OperationContract]
    CustomerInfo GetCustomerInfo(string customerKey);
    [OperationContract]
    void UpdateCustomerInfo(CustomerInfo info);
    [OperationContract]
    void DeleteCustomerInfo(string customerKey);
}

And, yes, this is very RPC-like and I could use this exact contract with Enterprise Services or Remoting. It’s also perfectly fine for SOAP Web Services. The action values that Indigo uses for its dispatcher are, because we don’t specify any overrides in the attributes, derived from the method names combined with the port type name (here implicitly ICustomerInfo) and the contract namespace (here implicitly http://tempuri.org). So for this plain definition the action value for the operation GetCustomerInfo() is http://tempuri.org/ICustomerInfo/GetCustomerInfo. An incoming message with that exact URI as its action identifier is mapped to the operation implementation of the GetCustomerInfo() method.
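If the derived defaults don’t suit you, they can be overridden in the attributes. A sketch (the action URI and contract namespace here are made up for illustration):

```csharp
[ServiceContract(Namespace = "urn:example-com:customers")]
interface ICustomerInfoExplicit
{
    // Overrides the derived action value
    // (http://tempuri.org/ICustomerInfo/GetCustomerInfo) with an explicit one.
    [OperationContract(Action = "urn:example-com:customers:getcustomerinfo")]
    CustomerInfo GetCustomerInfo(string customerKey);
}
```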

However, without SOAP this dispatch strategy no longer works. Let’s assume from here on that the only thing that goes onto the wire are plain XML documents and not SOAP envelopes. We only send and receive XML payload with nothing around it. (You’ll get to see the “how to” later in this series).

Also, if you were a REST/POX person, you’d rightfully say that there is redundancy in this contract for two reasons. First, because HTTP is an application protocol (Mark Baker, one of the most visible REST proponents, keeps reminding the public about this) and it already defines methods for “Get”, “Update” (PUT), and “Delete”. With SOAP web services, everything is typically POSTed and so it’s effectively tunneling all sorts of semantics through what’s supposed to be the create semantics of HTTP and that makes purists fundamentally unhappy. Second, the “customerKey” for identifying the object is redundant because identifying the resource you want to modify or query is the job of the URL. (Note that I intentionally leave “Create” (POST) out of this for the moment. We’ll get back to that later.)

A more RESTish and HTTP aligned contract definition could look like this:

[ServiceContract]
interface ICustomerResource
{
    [OperationContract]
    CustomerInfo Get();
    [OperationContract]
    void Put(CustomerInfo info);
    [OperationContract]
    void Delete();
}

Now we assume, for a moment, that every customer in the system had its own HTTP service endpoint. If you have a million customers, you have a million endpoints, probably looking like this: http://www.example.com/myapp/customers/00212332. Each of these endpoints has an implementation of the shown interface, representing the resource.

Each of these million services already knows the customer key when a call reaches its endpoint, because each endpoint represents exactly one customer. Therefore we don’t have to pass any parameters to the service for Get or Delete. Get returns the data representation of the service endpoint’s very own, exclusively assigned customer. Following the same logic, the CustomerInfo record that we pass as an argument to Put() (Update) doesn’t need to contain an identifier. This new contract definition would also have sufficient metadata for a dispatcher extension for Indigo that we need to replace the SOAP action dispatcher, because we could map the incoming HTTP method directly to the respective handler by doing a (case-insensitive) string compare to the method name on the interface. HTTP GET maps to Get(), HTTP PUT maps to Put(), and so forth.
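The mapping idea itself is trivial and can be sketched outside of any Indigo plumbing: reflect over the contract and pick the method whose name matches the HTTP method, ignoring case. (The real operation selector, shown later in the series, hooks into the dispatch runtime instead of using raw reflection like this.)

```csharp
using System;
using System.Reflection;

static class MethodNameDispatchSketch
{
    // Returns the contract method whose name matches the HTTP method,
    // compared case-insensitively; null if there is no match.
    public static MethodInfo SelectOperation(Type contractType, string httpMethod)
    {
        foreach (MethodInfo candidate in contractType.GetMethods())
        {
            if (string.Compare(candidate.Name, httpMethod,
                               StringComparison.OrdinalIgnoreCase) == 0)
            {
                return candidate;
            }
        }
        return null;
    }
}
```

SelectOperation(typeof(ICustomerResource), "GET") would yield the Get() method; an HTTP method without a counterpart on the interface yields null and would have to be answered with a fault (or, more HTTP-appropriately, a 405).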

I think that you probably even could create a million endpoints, but of course such an application would be a complete pig. I haven’t tried and I don’t think you should. So we should find a way to optimize the million endpoints down to one endpoint, and from an implementation standpoint it would be rather useful if one service implementation could deal with multiple resource types. That means that we might want to have multiple GET or DELETE methods sitting on the same interface but handling different resources. That aside, “Post” and “Put” are not immediately intuitive names for “Create” and “Update”, so it’d be nice to decouple the application’s operation names from the HTTP method names.

To satisfy all of these requirements, I am adding a little additional metadata to this contract declaration:

[ServiceContract]
interface IMyApp
{
    [OperationContract, HttpMethod("GET",UriSuffix="/customers/*")]
    CustomerInfo GetCustomerInfo();
    [OperationContract, HttpMethod("PUT", UriSuffix = "/customers/*")]
    void UpdateCustomerInfo(CustomerInfo info);
    [OperationContract, HttpMethod("DELETE", UriSuffix = "/customers/*")]
    void DeleteCustomerInfo();
}

The HttpMethodAttribute that I have written, and which I’ll show and explain in more detail in the next post, has a mandatory argument, method, which is the HTTP method that the operation handles. Along with that I’ll show an Indigo address filter and an Indigo operation selector that plug into the Indigo dispatch engine and perform the mapping of the incoming request to the correct handler based on the comparison of the HTTP method value and the attribute’s method value.
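Structurally, such an attribute can’t need much more than the two values used above. This is a sketch of the metadata it has to carry, not the actual implementation (which follows in the next post):

```csharp
using System;

// Sketch of the metadata the dispatcher needs; the real HttpMethodAttribute
// is shown and explained in the next post.
[AttributeUsage(AttributeTargets.Method)]
public class HttpMethodAttribute : Attribute
{
    private string method;
    private string uriSuffix;

    // Mandatory positional argument: the HTTP method this operation handles.
    public HttpMethodAttribute(string method)
    {
        this.method = method;
    }

    public string Method
    {
        get { return method; }
    }

    // Optional named property: narrows the match to a URI suffix;
    // '*' acts as a wildcard.
    public string UriSuffix
    {
        get { return uriSuffix; }
        set { uriSuffix = value; }
    }
}
```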

The optional, named attribute property UriSuffix allows narrowing the match to a namespace. If an implementation of this contract were hosted at http://www.example.com/myapp, only HTTP GET requests made on http://www.example.com/myapp/customers/ or any sub-path of that URL (the ‘*’ acts as wildcard) would be dispatched to GetCustomerInfo(). Inside the method it is then very easy (String.LastIndexOf(‘/’)) to parse out the customer identifier from the URL, which can be retrieved from the HTTP request message property or the incoming message’s header collection (Indigo maps a set of HTTP transport information items to message headers if you ask it to). So if we were extending this service to also manage a set of phone/fax/mobile numbers for the customer, we could do it this way:
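The parsing step inside the operation is a one-liner once you have the request URI in hand; a sketch on a literal URL:

```csharp
// Sketch: extracting the customer key from the request URI. In the service,
// the URI would come from the HTTP request message property rather than
// being a literal.
string requestUri = "http://www.example.com/myapp/customers/00212332";
string customerKey = requestUri.Substring(requestUri.LastIndexOf('/') + 1);
// customerKey is "00212332"
```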

[ServiceContract]
interface IMyApp
{
    [OperationContract, HttpMethod("POST", UriSuffix = "/customers/*/comm/*")]
    void CreateCommunicationNumber(CommNoInfo commNoInfo);
    [OperationContract, HttpMethod("GET", UriSuffix = "/customers/*/comm/*")]
    CommNoInfo GetCommunicationNumber();
    [OperationContract, HttpMethod("PUT", UriSuffix = "/customers/*/comm/*")]
    void UpdateCommunicationNumber(CommNoInfo info);
    [OperationContract, HttpMethod("DELETE", UriSuffix = "/customers/*/comm/*")]
    void DeleteCommunicationNumber();
    [OperationContract, HttpMethod("GET",UriSuffix="/customers/*")]
    CustomerInfo GetCustomerInfo();
    [OperationContract, HttpMethod("PUT", UriSuffix = "/customers/*")]
    void UpdateCustomerInfo(CustomerInfo info);
    [OperationContract, HttpMethod("DELETE", UriSuffix = "/customers/*")]
    void DeleteCustomerInfo();
}

With the support for multiple namespaces we can create a neat, hierarchical external representation of our data whereby each data element has its individual URL. The fun part is that the application-level implementation does not differ greatly from what you would do in a “normal” app. The “magic” sits in the infrastructure. Sticking with our example, the home phone number data bit for a customer might be retrievable or manipulated here: http://www.example.com/myapp/customers/00212332/comm/home-phone.

Mind that for all examples I’ve shown here, I made the implicit suggestion that we’d use XML serialization. That’s really only for illustrative purposes. It turns out that a lot of the REST/POX proponents are also defenders of the pure beauty of angle brackets and therefore I’ll switch to a pure XML message model with the next post, because that is indeed quite a bit more flexible for this particular application style as you’ll see.

Go to Part 2

Categories: Indigo

December 7, 2005
@ 11:22 PM

Back to blogland. Looking back at this year, I have hardly blogged at all. Partly because I was too busy and partly because I just had better things to do with my free time. Anyways, in the upcoming weeks I'll write about the things that I've been quietly building in the past half year or so and also dig into and publish stuff from my code archive where I still have some gems laying around that should really be published before they get totally useless. I even have a very cool NETFX 2.0 update for this here.

Part of what I am going to blog about and explain in quite some detail is the (code-named) "Clemens TV" project, which I keep working on. As things stand right now, there are so many variables and configuration issues with getting this to work for everyone (or "anyone but myself" for starters) that it doesn't seem feasible from a support perspective to make all of it public in the same way as I did with dasBlog. Instead, I'll publish a framework that allows hooking in all sorts of (self-written) live TV providers into a common (Indigo/WCF) server app. I will publish a provider for public web streams.

However, if you happen to use SnapStream's Beyond TV and have an additional Beyond TV Link license (that's required for the Beyond TV provider for the app), you have at least one software encoding TV card (hardware encoding cards won't work for web streams), and you have a connection with at least 256 KBps upstream, drop me a line to clemensv@newtelligence.com and I'll put you on a short list for those folks who might get the provider for testing (and to keep) once I am happy with it. We'll see where we go from there.

That said, the application is only partially about TV. It's a showcase demonstrating that Indigo is not only about pushing SOAP envelopes around. I am sending RSS, ASX, OPML and multi-gigabyte, restartable MPEG downloads through Indigo channels and all the receiving application sees is a plain old data stream or plain old XML (nicknamed POX). And when I want to record a show I send an HTTP POST to an endpoint to update the episode details and add a recording and when I want to cancel the recording I send an HTTP DELETE to remove the recording job. That smells like REST. I am sure Mark Baker will dig Indigo once he sees my set of ServiceModel extensions ;-)

Anyways, this is just a "heads up" that it's probably worth looking in this direction in the upcoming weeks, no matter whether you are checking out Indigo today or are doing stuff with shipping technologies such as Enterprise Services.

Categories: Indigo

March 17, 2005
@ 01:22 AM

The Indigo bits are out at MSDN Subscriber downloads. Go get them and start playing.

  Tools, SDKs and DDKs,
      Platform Tools, SDKs, DDKs
          WinFX SDK – Community Technology Preview
              Avalon and Indigo Community Technology Preview - March 05 (English)

Categories: Indigo

I’ll write a few more parts of my little Indigo series next weekend (too busy during the week), and will move from “throw arbitrary XML on the wire” to typed messages. However, before I’ll do so, I am curious about your opinion and I am asking you to comment (on the blog-site) on which of the following two declarations you would prefer.

I should probably quickly explain a few things before I let you look at the code snippets: The [DataContract] attribute essentially replaces [Serializable] for Indigo and is used to label classes that can be serialized by the System.Runtime.Serialization infrastructure into XML or into a binary representation. So the serialization control through attributes is unified and independent of the actual output flavor you choose at runtime. The [DataMember] attribute labels fields or properties that are part of the data contract and should be (de)serialized. Unlike the current serialization models of Remoting (System.Runtime.Remoting.Formatters) and the XML Serializer (System.Xml.Serialization) where the serializers grab anything public, this model is strictly opt-in, meaning that public fields and properties do not get serialized unless you explicitly label them with [DataMember]. Even more surprising, the new serialization infrastructure does work with fields that are private.
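To see the opt-in rule at work, here is a minimal sketch using the serializer API under its later, shipped name (DataContractSerializer); the class names in the CTP discussed here may differ:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class Person
{
    [DataMember]      // serialized, even though the field is private
    private string name;

    public int Age;   // public, but NOT serialized: no [DataMember]

    public Person(string name) { this.name = name; }
}

class OptInDemo
{
    static void Main()
    {
        DataContractSerializer serializer = new DataContractSerializer(typeof(Person));
        StringWriter buffer = new StringWriter();
        using (XmlWriter writer = XmlWriter.Create(buffer))
        {
            serializer.WriteObject(writer, new Person("Alice"));
        }
        // The output carries the private "name" member but no "Age" element.
        Console.WriteLine(buffer.ToString());
    }
}
```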

I have a clear preference for one of these two declarations and also have what I think is a solid explanation for why I prefer it, but before I elaborate, I am interested in your opinion.

Version A

[DataContract]
public partial class Address
{
    [DataMember("Company")]
    private string company;
    [DataMember("RecipientName")]
    private string recipientName;
    [DataMember("AddressLine1")]
    private string addressLine1;

    ... more fields ...

    public string Company
    {
        get { return company; }
        set { company = value; }
    }
   
    public string RecipientName
    {
        get { return recipientName; }
        set { recipientName = value; }
    }
   
    public string AddressLine1
    {
        get { return addressLine1; }
        set { addressLine1 = value; }
    }

    ... more properties and methods and stuff ...
}

 Version B

[DataContract]
public partial class Address
{
    private string company;
    private string recipientName;
    private string addressLine1;

    ... more fields ...

    [DataMember("Company")]
    public string Company
    {
        get { return company; }
        set { company = value; }
    }

    [DataMember("RecipientName")]
    public string RecipientName
    {
        get { return recipientName; }
        set { recipientName = value; }
    }
    [DataMember("AddressLine1")]
    public string AddressLine1
    {
        get { return addressLine1; }
        set { addressLine1 = value; }
    }

    ... more properties and methods and stuff ...
}

Consider this obvious statement: The class is declared in this way to provide programmatic access to and encapsulation of data that will eventually be serialized into some wire format or deserialized from a wire format.

Categories: Indigo

[Read Part 1 and Part 2 first]

As with parts 1 and 2, I’ll stick with the “this isn’t RPC” theme for this 3rd part of this little series and will show how to flow free-form XML from and to services. However, I will drop the “client”/”server” nomenclature from here on and will talk about endpoints. If you look at the contract below (along with the following explanation, of course), you’ll quickly figure out why – both parties in the “buyer”/”seller” conversation I am declaring in the contract below act as client and as server at the same time.

In contrast to the previous two examples, I am not using the raw Message class, but I move one notch up on the messaging stack and use the XmlSerializer formatting mode for Indigo, which allows me to flow the contents of an XmlNode between services just like it can be done today with ASP.NET Web Services. In addition, I show how custom message headers can be declared and flowed with (really: inside) messages. But first things first:

The snippet below declares one contract (!) with two endpoint service contracts. One endpoint defines the “seller” side and the other defines the “buyer” side of a duplex conversation that two service implementations will have about a (simplified) purchasing process. It also defines an application-specific (SOAP-) header that is used to flow the purchasing process identifier between the parties. That identifier can be used to locate the process state from disk or from some in-memory location at either side as the conversation progresses.

The seller-side service contract is defined through the ISeller interface that is appropriately labeled with a [ServiceContract] attribute and the buyer-side likewise defined through the IBuyer interface. The fusion of these two interfaces into what is effectively a single contract is established by mutually linking both interfaces by setting the respective CallbackContract property of the [ServiceContract] attribute to the respective other interface type. I highlighted the two places where that’s being done.

When I say “one contract”, that is not really true on the WSDL level. In WSDL, both interfaces would indeed be represented as independent contracts. (Which goes to show that WSDL isn’t really “the contract”, but represents just a subset of the complete metadata model).

Each operation in these contracts is labeled with an [OperationContract] attribute that defines the message flow as IsOneWay=true. That’s so because in a duplex conversation, messages always flow unidirectionally and the receiver answers not by “returning a result”, but rather by sending a message (or multiple messages) to the other party’s endpoint. All operation contracts also define the operation style to be DocumentBare, which means that the infrastructure will not auto-generate body wrapper elements.

Instead, each operation defines its own body wrapper by flagging the XmlNode typed argument for the message content with a [MessageBody] attribute and assigning an appropriate name to it.  Above the XmlNode content argument, you can see how the custom header PurchaseProcessHeader is specified for each operation. Custom headers are flagged with the [MessageHeader] attribute and therefore flow in the soap:Header section of the message.

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.Runtime.Serialization;
using System.Xml;
using System.Xml.Serialization;

namespace DuplexMessagingConversation
{
    [XmlRoot(Namespace = PurchaseProcessHeader.NamespaceURI)]
    [XmlType(Namespace = PurchaseProcessHeader.NamespaceURI)]
    public class PurchaseProcessHeader
    {
        public const string NamespaceURI="urn:newtelligence-com:indigosamples:purchasing";
        public const string ElementName="PurchaseOrder";

        private string orderIdentifier;
       
        public string OrderIdentifier
        {
            get { return orderIdentifier; }
            set { orderIdentifier = value; }
        }
    }

    [ServiceContract(Namespace = "urn:newtelligence-com:indigosamples:seller",
                     Session = false,
                     CallbackContract = typeof(IBuyer),
                     FormatMode = ContractFormatMode.XmlSerializer)]
    interface ISeller
    {
        [OperationContract(IsOneWay=true,IsInitiating=true,
                           Style=ServiceOperationStyle.DocumentBare)]
        void HandlePurchaseOrder(
            [MessageHeader(Name=PurchaseProcessHeader.ElementName,
                           Namespace=PurchaseProcessHeader.NamespaceURI)]
            PurchaseProcessHeader process,
            [MessageBody(Name="PurchaseOrderMessage")]
            XmlNode purchaseOrder);

        [OperationContract(IsOneWay = true, IsInitiating = false,
                           Style = ServiceOperationStyle.DocumentBare)]
        void HandlePaymentNotification(
            [MessageHeader(Name = PurchaseProcessHeader.ElementName,
                           Namespace = PurchaseProcessHeader.NamespaceURI)]
               PurchaseProcessHeader process,
            [MessageBody(Name = "PaymentNotificationMessage")]
               XmlNode paymentNotification);

        [OperationContract(IsOneWay = true, IsInitiating = false, IsTerminating = true,
                           Style = ServiceOperationStyle.DocumentBare)]
        void HandleShippingConfirmation(
            [MessageHeader(Name = PurchaseProcessHeader.ElementName,
                           Namespace = PurchaseProcessHeader.NamespaceURI)]
            PurchaseProcessHeader process,
            [MessageBody(Name = "ShippingConfirmationMessage")]
            XmlNode shippingConfirmation);
    }

    [ServiceContract(Namespace="urn:newtelligence-com:indigosamples:buyer",
                     Session = false,
                     CallbackContract = typeof(ISeller),
                     FormatMode=ContractFormatMode.XmlSerializer)]
    interface IBuyer
    {
        [OperationContract(IsOneWay = true, IsInitiating = true,
                           Style = ServiceOperationStyle.DocumentBare)]
        void HandlePurchaseOrderConfirmation(
            [MessageHeader(Name = PurchaseProcessHeader.ElementName,
                           Namespace = PurchaseProcessHeader.NamespaceURI)]
            PurchaseProcessHeader process,
            [MessageBody(Name = "PurchaseOrderConfirmationMessage")]
            XmlNode purchaseOrderConfirmation);

        [OperationContract(IsOneWay = true, IsInitiating = false,
                           Style = ServiceOperationStyle.DocumentBare)]
        void HandleInvoice(
            [MessageHeader(Name = PurchaseProcessHeader.ElementName,
                           Namespace = PurchaseProcessHeader.NamespaceURI)]
            PurchaseProcessHeader process,
            [MessageBody(Name = "InvoiceMessage")]
            XmlNode invoice);

        [OperationContract(IsOneWay = true, IsInitiating = false, IsTerminating = true,
                           Style = ServiceOperationStyle.DocumentBare)]
        void HandleShippingNotification(
            [MessageHeader(Name = PurchaseProcessHeader.ElementName,
                           Namespace = PurchaseProcessHeader.NamespaceURI)]
            PurchaseProcessHeader process,
            [MessageBody(Name = "ShippingNotificationMessage")]
            XmlNode shippingNotification);
    }
}

To illustrate the effect of these declarations on the wire (I will spare you the XSD/WSDL goop), I’ll show a sample message (grabbed from the debugger) as it can be seen at the ISeller endpoint’s HandlePurchaseOrder operation when it arrives.

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:r="http://schemas.xmlsoap.org/ws/2005/01/rm">
    <s:Header>
        <a:Action s:mustUnderstand="1">
            urn:newtelligence-com:indigosamples:seller/ISeller/HandlePurchaseOrder
        </a:Action>
        <h:PurchaseOrder xmlns="urn:newtelligence-com:indigosamples:purchasing"
                         xmlns:h="urn:newtelligence-com:indigosamples:purchasing"
                         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                         xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <OrderIdentifier>1234567890</OrderIdentifier>
        </h:PurchaseOrder>
        <r:Sequence s:mustUnderstand="1">
            <r:Identifier>uuid:b99041bf-fab8-45dd-9235-0909d9c61d04;id=2</r:Identifier>
            <r:MessageNumber>1</r:MessageNumber>
        </r:Sequence>
        <a:From>
            <a:Address>net.tcp://localhost/buyer/reply/e01289a8-424f-4e1a-bba5-b3fb7c92a023</a:Address>
        </a:From>
        <a:To s:mustUnderstand="1">net.tcp://localhost/seller</a:To>
    </s:Header>
    <s:Body>
        <PurchaseOrderMessage xmlns="urn:newtelligence-com:indigosamples:seller">
            <Order xmlns="">...</Order>
        </PurchaseOrderMessage>
    </s:Body>
</s:Envelope>

So … having the contract declaration in place, we can build the service. With your knowledge from the previous parts of this series, the seller side is (almost) straightforward to implement. I create a SellerService supporting the defined ISeller interface and write all operations (methods) in a similar fashion. First I dump out the content of the incoming message and an artificial instance identifier I use to play with instancing. The only “magic” is in how I obtain the callback channel that I need to be able to send my answers to the other side. To be precise, the magic isn’t mine, it’s sitting inside Indigo. The call IBuyer buyer = OperationContext.Current.GetCallbackChannel<IBuyer>() yields a ready-to-use channel that is properly configured and bound to the “other side”. Having that in hand, I cook up an answer (or two, or none, as you can see below) and send that to “the buyer”. The hosting class and the service host are standard fare.

using System;
using System.Xml;
using System.ServiceModel;
using System.Runtime.Serialization;

namespace DuplexMessagingConversation
{
    [ServiceBehavior(InstanceMode = InstanceMode.PrivateSession)]
    class SellerService : ISeller
    {
        Guid instanceId = Guid.NewGuid();

        public void HandlePurchaseOrder(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Seller: Purchase Order Received\n\t{0}\n\tInstance {1}",
                              data.OuterXml, instanceId);
            IBuyer buyer = OperationContext.Current.GetCallbackChannel<IBuyer>();

            XmlDocument orderConfirmation = new XmlDocument();
            orderConfirmation.LoadXml("<OrderConfirmation>...</OrderConfirmation>");
            buyer.HandlePurchaseOrderConfirmation(process, orderConfirmation);

            XmlDocument invoice = new XmlDocument();
            invoice.LoadXml("<Invoice>...</Invoice>");
            buyer.HandleInvoice(process, invoice);
        }

        public void HandlePaymentNotification(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Seller: Payment Notification Received\n\t{0}\n\tInstance {1}",
                              data.OuterXml, instanceId);
            IBuyer buyer = OperationContext.Current.GetCallbackChannel<IBuyer>();

            XmlDocument shippingNotification = new XmlDocument();
            shippingNotification.LoadXml("<Shipped>...</Shipped>");
            buyer.HandleShippingNotification(process, shippingNotification);
        }

        public void HandleShippingConfirmation(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Seller: Shipping Confirmation Received\n\t{0}\n\tInstance {1}",
                              data.OuterXml, instanceId);
        }
    }

    class Seller
    {
        ServiceHost<SellerService> serviceHost;

        public void Open()
        {
            serviceHost = new ServiceHost<SellerService>();
            serviceHost.Open();
        }

        public void Close()
        {
            serviceHost.Close();
        }
    }
}

The buyer-side’s service implementation looks almost identical. The one significant difference here is that the buyer is (in the self-hosted scenario I have here: must be) a singleton within the scope of the conversation. That means that the initiator of the conversation (what we usually call the “client”) has to create a service instance and hand it down into the infrastructure. Because I want to know when the conversation is over so that I can shut down my test program, I hand a ManualResetEvent to the service instance and have it set the event to signaled when the buyer’s last expected message in the purchasing process arrives (the shipping notification). Otherwise the service implementation doesn’t hold any more surprises.

More interesting is the InitiatePurchase method. It predictably creates a service host instance for the buyer service and a channel factory that we need to send the first message (purchase order) to the seller. From there onwards, things are a little different than in the previous examples.

As the next step, I create a “service site”, which acts as the manager for the duplex conversation we’re setting up. The ServiceSite is initialized with the service host and a newly created service instance. As I indicated in the previous paragraph, that instance is a singleton for the conversation; it’s not a singleton per se.

Using the service site as an argument, I can now create a duplex channel with a call to CreateDuplexChannel on the channel factory. The resulting channel is set up to do everything necessary to listen for answers in the scope of the conversation and to relay the required “send answers here” info to the other side. If you look at the SOAP message above, you’ll see how that back reference flows in a WS-Addressing wsa:From header, which is a reasonable thing to do as per WS-Addressing (see: 3. / [reply endpoint] paragraph).

Once I have the channel in hand, I create the custom header instance and a purchase order document (well…) and send it off to the seller side. Once that’s done, I hang out and wait until the conversation is over, and subsequently shut down.

using System;
using System.Xml;
using System.ServiceModel;
using System.Threading;

namespace DuplexMessagingConversation
{
    class BuyerService : IBuyer
    {
        Guid instanceId = Guid.NewGuid();
        ManualResetEvent waitHandle;

        public BuyerService(ManualResetEvent waitHandle)
        {
            this.waitHandle = waitHandle;
        }

        public void HandlePurchaseOrderConfirmation(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Buyer: Purchase Order Confirmation Received\n\t{0}\n\tInstance {1}",
                               data.OuterXml, instanceId);
            return;
        }

        public void HandleInvoice(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Buyer: Invoice Received\n\t{0}\n\tInstance {1}",
                               data.OuterXml, instanceId);
            ISeller seller = OperationContext.Current.GetCallbackChannel<ISeller>();

            XmlDocument paymentNotification = new XmlDocument();
            paymentNotification.LoadXml("<Payment>...</Payment>");
            seller.HandlePaymentNotification(process, paymentNotification);
        }

        public void HandleShippingNotification(PurchaseProcessHeader process, XmlNode data)
        {
            Console.WriteLine("Buyer: Shipping Notification Received\n\t{0}\n\tInstance {1}",
                               data.OuterXml, instanceId);
            ISeller seller = OperationContext.Current.GetCallbackChannel<ISeller>();

            XmlDocument shippingConfirmation = new XmlDocument();
            shippingConfirmation.LoadXml("<ShipmentReceived>...</ShipmentReceived>");
            seller.HandleShippingConfirmation(process, shippingConfirmation);
            waitHandle.Set();
        }
    }

    class Buyer
    {
        public void InitiatePurchase()
        {
            ServiceHost<BuyerService> buyerHost = new ServiceHost<BuyerService>();
            using (ChannelFactory<ISeller> channelFactory = new ChannelFactory<ISeller>("clientChannel"))
            {
                ManualResetEvent conversationDone = new ManualResetEvent(false);
                using (ServiceSite replyTarget = new ServiceSite(buyerHost, new BuyerService(conversationDone)))
                {
                    ISeller channel = channelFactory.CreateDuplexChannel(replyTarget);

                    PurchaseProcessHeader header = new PurchaseProcessHeader();
                    header.OrderIdentifier = "1234567890";

                    XmlDocument purchaseOrderDocument = new XmlDocument();
                    purchaseOrderDocument.LoadXml("<Order>...</Order>");
                    channel.HandlePurchaseOrder(header, purchaseOrderDocument);

                    conversationDone.WaitOne();
                    replyTarget.Close();
                }
                channelFactory.Close();
            }
            buyerHost.Close();
        }
    }
}

The Program is simple and predictable; I am just posting it for completeness and because I renamed the classes.

using System;

namespace DuplexMessagingConversation
{
    class Program
    {
        static void Main(string[] args)
        {
            Seller server = new Seller();
            server.Open();

            Buyer client = new Buyer();
            client.InitiatePurchase();

            Console.WriteLine("Press ENTER to quit");
            Console.ReadLine();
            server.Close();
        }
    }
}

The configuration file that goes with this example is of course a bit different from the previous ones. The <client> section and the buyerClientBinding binding configuration apply to the buyer side; the <services> section and the sellerBinding are for the seller side. These sections would be split across two configuration files if we were to host the sample in two processes.

Of course, the buyer’s <client>/<endpoint> definition for the channel refers to the buyerClientBinding. That binding defines three required binding elements: <reliableSession> configures the channel to use a reliable messaging session with default values, <compositeDuplex/> enables duplex support and <tcpTransport/> selects the TCP transport. The order of these elements is significant and defines how these “behaviors” are stacked in the channel. Quite special is the clientBaseAddress attribute of the <compositeDuplex/> element; this value is used as the base URI to dynamically construct the endpoint on which replies shall be received by the buyer instance for this conversation. The result of that composition can be seen in the wsa:From element in the SOAP message above.

The seller-side configuration for the <service> and its <endpoint> is largely equivalent to what I’ve explained in the previous examples. The only real difference is that the sellerBinding binding now also defines the required binding elements and behaviors I just pointed out.

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding configurationName="sellerBinding">
                    <reliableSession/>
                    <compositeDuplex/>
                    <tcpTransport/>
                </binding>
                <binding configurationName="buyerClientBinding">
                    <reliableSession/>
                    <compositeDuplex clientBaseAddress="net.tcp://localhost/buyer/reply"/>
                    <tcpTransport/>
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="net.tcp://localhost/seller"
                      bindingConfiguration="buyerClientBinding"
                      bindingType="customBinding"
                      configurationName="clientChannel"
                      contractType="DuplexMessagingConversation.ISeller, DuplexMessagingConversation"/>
        </client>
        <services>
            <service serviceType="DuplexMessagingConversation.SellerService, DuplexMessagingConversation">
                <endpoint contractType="DuplexMessagingConversation.ISeller, DuplexMessagingConversation"
                          address="net.tcp://localhost/seller"
                          bindingType="customBinding"
                          bindingConfiguration="sellerBinding" />
            </service>
        </services>
    </system.serviceModel>
</configuration>

And, lastly, here’s the output:

Seller: Purchase Order Received
        <Order xmlns="">...</Order>
        Instance eb628fce-ac56-43af-9326-5bfc62a101dc
Buyer: Purchase Order Confirmation Received
        <OrderConfirmation xmlns="">...</OrderConfirmation>
        Instance c1ce0c0f-fb98-4432-86fb-c81ac7243295
Buyer: Invoice Received
        <Invoice xmlns="">...</Invoice>
        Instance c1ce0c0f-fb98-4432-86fb-c81ac7243295
Seller: Payment Notification Received
        <Payment xmlns="">...</Payment>
        Instance eb628fce-ac56-43af-9326-5bfc62a101dc
Buyer: Shipping Notification Received
        <Shipped xmlns="">...</Shipped>
        Instance c1ce0c0f-fb98-4432-86fb-c81ac7243295
Seller: Shipping Confirmation Received
        <ShipmentReceived xmlns="">...</ShipmentReceived>
        Instance eb628fce-ac56-43af-9326-5bfc62a101dc
Press ENTER to quit

Again, the messages are free-form XML, so I am using Indigo strictly as a raw messaging platform. It’s just a bit more powerful. ;-) If I showed you a functionally equivalent application based on System.Messaging and MSMQ, you wouldn’t be done reading yet.

Categories: Indigo

[You should read Part 1 of this little series before you proceed reading this one.]

In this 2nd part I am extending the simple messaging example of Part 1 by adding some explicit WS-Addressing trickery. Addressing is so fundamental that its properties are baked right into the Headers collection of the Indigo Message. Even though there are (and I will eventually show) much easier ways to do request/reply management that hide most of what I am doing here very conveniently under the covers, I’ll give you an example of how you can send messages to a service and then instruct the service to explicitly reply back to an endpoint you provide. To make it a little more fun, I am setting up two alternate reply endpoints and have the service come back to them in turns. The Program host class is identical to the one in the previous example, so I’ll show only the client and service code along with the config.

The server-side code below grew a little bit, as you can see. Now there is an IGenericReplyChannel that is the contract for the replies. It looks suspiciously like the client-side’s IGenericMessageChannel, and it is indeed a copy/paste clone of it; I just didn’t want to share code between the client and server side. The Receive method has changed insofar as it no longer prints the message to the console, but now creates a reply and sends it to the endpoint that the client indicates through the (WS-Addressing) ReplyTo header of the incoming message.

To do this, the service constructs a ChannelFactory<IGenericReplyChannel>, using the endpoint address indicated in the incoming message’s ReplyTo header and getting the binding information from the “replyChannel” client setting in the config file shown further down. (Note that this is a bit simplistic, because it assumes that the ReplyTo EPR uses a compatible binding. There is a brilliant way to fix this, but … later.) Then the message body of the incoming message is read into an XmlDocument; if this were a real application, it would likely do something with the content here. For now, we just leave it as it is and punt it back out.

To construct the reply message, I don’t use the CreateReplyMessage() method provided on the Message class, simply because it doesn’t have an appropriate overload to deal with an XmlReader in the same way as Message.CreateMessage() does. I am sure that’s a minor oversight that’s just a problem with my particular Indigo build. Creating a reply is quite simple, though. All I need to do is copy the incoming message’s MessageID value into the RelatesTo.Reply property of the outgoing message. For simplicity, I don’t check whether that header is present and set, which I really should do, because there is no actual contract or policy in place (for now). Once I have the reply constructed, just copying the incoming body into it, I send it out through a channel (“proxy”) constructed by the channel factory.

using System;
using System.Xml;
using System.ServiceModel;
using System.Runtime.Serialization;

namespace SimpleAddressing
{
    [ServiceContract]
    interface IGenericMessageEndpoint
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void Receive(Message msg);
    }

    [ServiceContract]
    interface IGenericReplyChannel
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void Send(Message msg);
    }

    class GenericMessageEndpoint : IGenericMessageEndpoint
    {
        public void Receive(Message msg)
        {
            using (ChannelFactory<IGenericReplyChannel> channelFactory =
                new ChannelFactory<IGenericReplyChannel>(msg.Headers.ReplyTo, "replyChannel"))
            {
                XmlDocument doc = new XmlDocument();
                doc.Load(msg.GetBodyReader());

                // There is a msg.CreateReplyMessage(...), but that is missing the XmlReader ctor overload
                using (Message reply = Message.CreateMessage("urn:some-action-reply", new XmlNodeReader(doc)))
                {
                    reply.Headers.RelatesTo.Reply = msg.Headers.MessageID;
                    IGenericReplyChannel replyChannel = channelFactory.CreateChannel();
                    replyChannel.Send(reply);
                }
                channelFactory.Close();
            }
        }
    }

    class Server
    {
        ServiceHost<GenericMessageEndpoint> serviceHost;

        public void Open()
        {
            serviceHost = new ServiceHost<GenericMessageEndpoint>();
            serviceHost.Open();
        }

        public void Close()
        {
            serviceHost.Close();
        }
    }
}

Having a reply-enabled server side, we can now get to the juicy part: the client. Since we now need to listen for replies, the client has to expose a reply endpoint and therefore also act as a server. (That is the reason why “endpoint” is preferred in service-land over the “client”/“server” nomenclature.) Therefore, I define an IGenericReplyEndpoint contract (no surprises there) and implement it in GenericReplyEndpoint. To make the example a bit more fun, the constructor of that service class takes two arguments: the client argument refers to an instance of the Client application class and epName gives the service instance (!) a name. The client reference is used to let the client application know how many messages have already been received so that it can shut down once the expected replies for all sent messages have come back. The notification about received messages is done inside the ReceiveReply method, which otherwise just writes the message body to the console.

Unlike the previous example, this service implementation isn’t used directly. Instead, I derive two subclasses from it: ReplyEndpointA and ReplyEndpointB. These two classes each implement a constructor that passes “A” and “B”, respectively, for the epName argument to the base-class and pass-through the client argument. In case you wonder how the ServiceHost could possibly construct instances of these service classes, not knowing the appropriate parameters to pass to them: Instances of these two classes are pre-constructed and fed into the service host as singletons as you will see below.

using System;
using System.Xml;
using System.ServiceModel;
using System.Threading;

namespace SimpleAddressing
{
    [ServiceContract]
    interface IGenericMessageChannel
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void Send(Message msg);
    }

    [ServiceContract]
    interface IGenericReplyEndpoint
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void ReceiveReply(Message msg);
    }

    class GenericReplyEndpoint : IGenericReplyEndpoint
    {
        Client client;
        string epName;

        public GenericReplyEndpoint(Client client, string epName)
        {
            this.epName = epName;
            this.client = client;
        }

        public void ReceiveReply(Message msg)
        {
            XmlDictionaryReader xdr = msg.GetBodyReader();
            Console.Write("{0}: ", epName);
            Console.WriteLine(xdr.ReadOuterXml());
            client.MessageReceived();
        }
    }

    class ReplyEndpointA : GenericReplyEndpoint
    {
        public ReplyEndpointA(Client client):base(client, "A")
        {
        }
    }

    class ReplyEndpointB : GenericReplyEndpoint
    {
        public ReplyEndpointB(Client client)
            : base(client, "B")
        {
        }
    }

    … continued below …

The Client application class is a bit more intricate than the previous version, but there is no rocket science in there. I have a counter for the number of messages received and a ManualResetEvent that gets signaled whenever the number of received messages matches (or exceeds) the number of sent messages. That happens in the MessageReceived method, which is called by the service singletons. The class also has a UniqueIDGenerator, an Indigo-supplied class that lets me generate values for the MessageID header, which is required alongside ReplyTo.

In the SendLoop method, I now create two service host instances that shall receive the replies to the messages I send; one of type ServiceHost<ReplyEndpointA> and one of type ServiceHost<ReplyEndpointB>. Each of these hosts receives an instance of its service type as a construction argument. Doing so causes the service host to operate in singleton mode, meaning that it will not create new service instances by itself, but rather use only the exact instance supplied here. In the actual send loop, I alternate (i % 2 == 0) between those two service hosts and invoke SendMessage, passing the channel factory (not the channel, as in the previous example) and the chosen ServiceHost instance.

In SendMessage, I do a few simple things and only one not-so-obvious thing. A new message is constructed as the first step and loaded with an action and the body content. Then I grab the destination address from the channel factory, which sits in the channel factory’s Description.Endpoint.Address property and assign that to the message’s To header. The MessageID is set to a new unique identifier created using the messageIdGenerator. All that is pretty straightforward. Not immediately clear might be what I am doing with the ReplyTo header:

Once a service host is Open, it’s bound to a set of endpoints and is actively listening on those endpoints using “endpoint listeners”. I am writing “set of endpoints” because a service might have several: each service can expose as many endpoints as it likes, each with a separate binding (transport/behavior/address) and each with a separate contract. There are puzzling special cases, of which you’ll see at least one in this series, where a service listens and properly responds to a contract type that is nowhere to be seen on the actual service implementation. The active endpoints sit on the EndpointListeners collection.

For simplicity (again, this is a bit naïve, but serves the purpose for the time being) and to obtain a ReplyTo address to pass to the service I am sending the message to, I reach into that collection and grab the first available endpoint listener’s address. What I should be doing here is to check whether that listener is indeed the one for the IGenericReplyEndpoint contract and whether I can find one with a binding that is mostly compatible with the one the outbound channel uses. The latter selection would be done to make sure that if I send out via “net.tcp” and I expose a “net.tcp” endpoint myself, I would preferably pass that endpoint instead of a possible “http” endpoint I might be listening on at the same time. Once ReplyTo is set, I send the message out.   

    … continuation from above …

    class Client
    {
        const int numMessages = 15;
        int messagesReceived;
        ManualResetEvent allReceived;
        UniqueIDGenerator messageIdGenerator;
        XmlDocument contentDocument;

        public Client()
        {
            messagesReceived = 0;
            allReceived = new ManualResetEvent(false);
            messageIdGenerator = new UniqueIDGenerator();
            contentDocument = new XmlDocument();
            contentDocument.LoadXml("<rose>is a</rose>");
        }

        void SendMessage(ChannelFactory<IGenericMessageChannel> channelFactory,
                         ServiceHost replyService)
        {
            XmlNodeReader content = new XmlNodeReader( contentDocument.DocumentElement);
            using (Message msg = Message.CreateMessage("urn:some-action", content))
            {
                msg.Headers.To = channelFactory.Description.Endpoint.Address;
                msg.Headers.MessageID = messageIdGenerator.Next();
                msg.Headers.ReplyTo = replyService.EndpointListeners[0].GetEndpointAddress();
                IGenericMessageChannel channel = channelFactory.CreateChannel();
                channel.Send(msg);
            }
        }

        public void SendLoop()
        {
            ServiceHost<ReplyEndpointA> replyServiceA = new ServiceHost<ReplyEndpointA>(new ReplyEndpointA(this));
            replyServiceA.Open();
            ServiceHost<ReplyEndpointB> replyServiceB = new ServiceHost<ReplyEndpointB>(new ReplyEndpointB(this));
            replyServiceB.Open();

            using (ChannelFactory<IGenericMessageChannel> channelFactory =
                        new ChannelFactory<IGenericMessageChannel>("clientChannel"))
            {
                channelFactory.Open();
               
                for (int i = 0; i < numMessages; i++)
                {
                    if (i % 2 == 0)
                    {
                        SendMessage(channelFactory, replyServiceB);
                    }
                    else
                    {
                        SendMessage(channelFactory, replyServiceA);
                    }
                }
                channelFactory.Close();
            }
            allReceived.WaitOne();
            replyServiceA.Close();
            replyServiceB.Close();
        }

        public void MessageReceived()
        {
            if (++ messagesReceived >= numMessages)
            {
                allReceived.Set();
            }
        }
    }
}
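
The listener-selection check I said I should be doing (but don’t) could look roughly like the sketch below, which prefers a reply listener whose transport scheme matches the outbound channel’s URI. Be warned that this is my assumption, not verified code: the EndpointListener element type, the EndpointAddress.Uri property, and iterating the EndpointListeners collection are guesses about this pre-release API, and a real version should also verify that the chosen listener serves the IGenericReplyEndpoint contract.

```csharp
// Sketch only (hypothetical member names): pick a reply listener whose
// URI scheme matches the outbound channel's, e.g. both "net.tcp" or
// both "http", so the service can actually reach the ReplyTo endpoint.
EndpointAddress PickReplyToAddress(ServiceHost replyService, Uri outboundUri)
{
    foreach (EndpointListener listener in replyService.EndpointListeners)
    {
        EndpointAddress candidate = listener.GetEndpointAddress();
        if (candidate.Uri.Scheme == outboundUri.Scheme)
        {
            return candidate;
        }
    }
    // Fall back to what the sample above does: the first available listener.
    return replyService.EndpointListeners[0].GetEndpointAddress();
}
```

With such a helper, the ReplyTo assignment in SendMessage would use the selected address instead of blindly taking the first listener.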

What’s left is the matching configuration. The mechanics of how the configuration maps to the classes and instances are largely the same as in the simple messaging example. A small difference is that the replyChannel client configuration has no target address attribute, because that one is always supplied via ReplyTo (refer to the GenericMessageEndpoint’s Receive method above to see how that is wired up). Oh, yes, and I switched it all to the http transport, in case you didn’t notice. TCP would work just as well, but I felt like I needed a little change. The assumed assembly name for this sample is “SimpleAddressing”, of course.

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding configurationName="defaultBinding">
                    <httpTransport/>
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost/genericep"
                      bindingConfiguration="defaultBinding"
                      bindingType="customBinding"
                      configurationName="clientChannel"
                      contractType="SimpleAddressing.IGenericMessageChannel, SimpleAddressing"/>
            <endpoint bindingConfiguration="defaultBinding"
                      bindingType="customBinding"
                      configurationName="replyChannel"
                      contractType="SimpleAddressing.IGenericReplyChannel, SimpleAddressing"/>
        </client>
        <services>
            <service serviceType="SimpleAddressing.GenericMessageEndpoint, SimpleAddressing">
                <endpoint contractType="SimpleAddressing.IGenericMessageEndpoint, SimpleAddressing"
                          address="http://localhost/genericep"
                          bindingType="customBinding"
                          bindingConfiguration="defaultBinding" />
            </service>
            <service serviceType="SimpleAddressing.ReplyEndpointA, SimpleAddressing">
                <endpoint contractType="SimpleAddressing.IGenericReplyEndpoint, SimpleAddressing"
                          address="http://localhost/genericreplyA"
                          bindingType="customBinding"
                          bindingConfiguration="defaultBinding" />
            </service>
            <service serviceType="SimpleAddressing.ReplyEndpointB, SimpleAddressing">
                <endpoint contractType="SimpleAddressing.IGenericReplyEndpoint, SimpleAddressing"
                          address="http://localhost/genericreplyB"
                          bindingType="customBinding"
                          bindingConfiguration="defaultBinding" />
            </service>
        </services>
    </system.serviceModel>
</configuration>

The output of the sample is predictable, isn’t it? The replies come back in sequence, alternating between the two reply services “A” and “B”.

B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
A: <rose>is a</rose>
B: <rose>is a</rose>
Press ENTER to quit

 

Categories: Indigo

This weekend I will create some samples for myself as a foundation for learning and fiddling around with Indigo bindings. A “binding” is a combination of transport and behavior settings that binds a service contract to an endpoint and it is a conceptual and functional superset of what wsdl:binding does. One of the great things about Indigo is that changing bindings and therefore adding/removing capabilities and even adding/exchanging/removing transports can be done with no impact on the code itself. All of that can be done in configuration. So what I will do is to share the base samples with you as I write them, explain a couple of concepts along the way, and use some very simplistic bindings for starters. Once I have figured out how the bindings stuff works (Citing one of the Indigettes at Microsoft: “That’s the part of the product that I’m afraid will be rocket science”), I can later reference these samples and show using configuration snippets what behaviors (e.g. transactions, security, reliable messaging) can be used in combination with which transports and contract types. So, watch this space, you can expect some code here.

What I’ll start with is the most simplistic and “raw” way to use Indigo that’s practical for someone with a life. The extensibility model will let you reach even deeper down into the guts, but I don’t want to drag you down there too far. Also, I don’t really know all of the scary dragons lurking there in the dark. I also want to start with this example to dispel some people’s impression that Indigo is just another “square brackets RPC-ish thing”.

The snippet below is likely the simplest possible Indigo service. I define an IGenericMessageEndpoint contract that has a single operation, Receive, which expects a System.ServiceModel.Message as input. The Message class is the immediate representation of a message that the entire Indigo infrastructure uses internally and that can be surfaced to the application using a contract definition like this. The [OperationContract] attribute signals that the operation is “one way”, so it’s clear that we’re not sending any immediate responses, not even faults. The Action is set to “*”, a wildcard indicating that all messages, irrespective of their Action URI, will be dispatched here. That is, unless there is another operation with a concrete (non-wildcard) Action; in that case, all messages matching that operation’s Action URI would be dispatched there and all other messages would flow into the wildcard operation.
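
To make that dispatch rule concrete, here is a small contract sketch of my own; the interface and operation names (and the "urn:ping" Action URI) are invented for illustration, and only the attribute usage mirrors the contracts in this series:

```csharp
[ServiceContract]
interface IRoutedEndpoint
{
    // A message whose Action header is exactly "urn:ping" is dispatched here.
    [OperationContract(IsOneWay = true, Action = "urn:ping")]
    void ReceivePing(Message msg);

    // Any message with a different Action falls through to the wildcard operation.
    [OperationContract(IsOneWay = true, Action = "*")]
    void ReceiveAnything(Message msg);
}
```

So a message with Action “urn:ping” lands in ReceivePing, while one with Action “urn:anything-else” lands in ReceiveAnything.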

The implementation of the contract in GenericMessageEndpoint just dumps the content of the message body onto the console by acquiring the XmlDictionaryReader of the message and writing the string’ized body content out.

The Server class constructs a ServiceHost<GenericMessageEndpoint> for the service implementation, which constructs endpoints from configuration settings, hosts these endpoints, and is responsible for creating instances of the service as messages arrive and need to be dispatched. As you can see, I do nothing more than constructing the host and Open it. The specifics of what transport is used and where the service is listening will be supplied in config, as you’ll see further down.

using System;
using System.Xml;
using System.ServiceModel;
using System.Runtime.Serialization;

namespace SimpleMessaging
{
    [ServiceContract]
    interface IGenericMessageEndpoint
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void Receive(Message msg);
    }

    class GenericMessageEndpoint : IGenericMessageEndpoint
    {
        public void Receive(Message msg)
        {
            XmlDictionaryReader xdr = msg.GetBodyReader();
            Console.WriteLine(xdr.ReadOuterXml());
        }
    }

    class Server
    {
        ServiceHost<GenericMessageEndpoint> serviceHost;

        public void Open()
        {
            serviceHost = new ServiceHost<GenericMessageEndpoint>();
            serviceHost.Open();
        }

        public void Close()
        {
            serviceHost.Close();
        }
    }
}

Below is the matching “raw” client. What we want to do here is to just construct a System.ServiceModel.Message, put some XML into it and throw it over the fence. To do that, I construct a client side contract IGenericMessageChannel (I am doing that to show that we’re really in “contract-free” raw messaging territory here) that has a Send operation, which “looks right” on the sender side vs. the receiver contract’s Receive, and also flags the operation as one-way and with a wildcard Action.

To set up a channel to the destination service, I can now (in SendLoop) construct a ChannelFactory<IGenericMessageChannel> over that contract and with the argument “clientChannel”, which is a reference into the configuration as I’ll show in a little bit. The channel factory is the client-side counterpart of the service host. It reads all information about the channel from the configuration, evaluates the bindings, binds to the right transports and behaviors, and also knows about the endpoint to talk to. Once I have a channel factory, I can Open it and have it give me a channel (or “proxy”) that I can talk through. In SendMessage I cook up a Message from an Action URI that I make up and an XmlReader instance layered over an XmlDocument that I keep around, and send that out to the service.

using System;
using System.Xml;
using System.ServiceModel;

namespace SimpleMessaging
{
    [ServiceContract]
    interface IGenericMessageChannel
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void Send(Message msg);
    }

    class Client
    {
        XmlDocument contentDocument;

        public Client()
        {
            contentDocument = new XmlDocument();
            contentDocument.LoadXml("<rose>is a</rose>");
        }

        void SendMessage(IGenericMessageChannel channel)
        {
            XmlNodeReader content = new XmlNodeReader(contentDocument.DocumentElement);
            using (Message msg = Message.CreateMessage("urn:some-action", content))
            {
                channel.Send(msg);
            }
        }

        public void SendLoop()
        {
            using (ChannelFactory<IGenericMessageChannel> channelFactory =
                        new ChannelFactory<IGenericMessageChannel>("clientChannel"))
            {
                channelFactory.Open();
                IGenericMessageChannel channel = channelFactory.CreateChannel();
                for (int i = 0; i < 15; i++)
                {
                    SendMessage(channel);
                }
                channelFactory.Close();
            }
        }
    }
}

The surrounding application for Client and Server (I run both in the same process for simplicity) is, of course, trivial. All I do is construct and start the server, construct a client, call its send loop and then wait for the user to be amazed by the (server’s) console output and press ENTER to quit. If I were making this more elaborate, I could wait until all sent messages had arrived at the service side and shut down automatically, but this is supposed to be simple.

using System;

namespace SimpleMessaging
{
    class Program
    {
        static void Main(string[] args)
        {
            Server server = new Server();
            server.Open();

            Client client = new Client();
            client.SendLoop();

            Console.WriteLine("Press ENTER to quit");
            Console.ReadLine();
            server.Close();
        }
    }
}

That’s as much code as we need to implement a one-way messaging client/server “system” that can throw XML snippets across a network transport.

To make it work, we need to configure this application and “deploy” it to a concrete environment. A simple configuration (assuming this is all compiled into “SimpleMessaging.exe” and hence the assembly name is “SimpleMessaging”) could look like the one shown below.

The <bindings> section contains one <customBinding> (meaning: not one of the predefined bindings), with a concrete configuration named “defaultBinding” that uses the tcpTransport. If I were setting up security or reliable messaging, I would also be doing that here and add the respective config elements alongside the TCP transport binding element, but we will keep it simple for the time being.

The <client> section defines, for the configurationName=”clientChannel” (look above in the client snippet how that maps to the ChannelFactory<IGenericMessageChannel> constructor call), which binding should be used. The example links up to the customBinding type and within that type to the “defaultBinding” config. Furthermore, the section defines to which contract type ([ServiceContract]-labeled interface or class) the endpoint is bound and, lastly and most importantly, the address at which the endpoint is listening to client messages.

The <service> section defines the server side of the story. The association between the service host and the configuration is done via the serviceType attribute. When the ServiceHost<GenericMessageEndpoint> is constructed in the server snippet above, the service host locates the section for the respective service type it is hosting by looking at this attribute. The endpoint definition on the server side is very similar to the client side, which should not be very surprising. It also refers to a binding using bindingType/bindingConfiguration, defines the address at which the service will be listening, and indicates which contract type applies for the endpoint.

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding configurationName="defaultBinding">
                    <tcpTransport/>
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="net.tcp://localhost/genericep"
                        bindingConfiguration="defaultBinding"
                bindingType="customBinding"
                configurationName="clientChannel"
                        contractType="SimpleMessaging.IGenericMessageChannel, SimpleMessaging"/>
        </client>
        <services>
            <service serviceType="SimpleMessaging.GenericMessageEndpoint, SimpleMessaging">
                <endpoint contractType="SimpleMessaging.IGenericMessageEndpoint, SimpleMessaging"
                                    address="net.tcp://localhost/genericep"
                                    bindingType="customBinding"
                                    bindingConfiguration="defaultBinding" />
            </service>
        </services>
    </system.serviceModel>
</configuration>

Running it all yields the following output, spit out by the server side, and just as expected:

<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
<rose>is a</rose>
Press ENTER to quit

Very simple and versatile one-way messaging flowing free-form XML. Not at all RPC-ish.

 

Categories: Indigo

February 11, 2005
@ 06:48 AM

Aaron Skonnard says I am clearly wrong with my demand that one shouldn’t have to look at WSDL/XSD/Policy. Well, at this point in time the tooling does indeed make it difficult to ignore angle brackets. But that’s not a reason to give up. I also find the “it all has to start at the angle bracket” stance overly idealistic.

I can type up an XML Schema in notepad, I can even type up a WSDL in notepad. As much as one would like to have it different, both “skills” are not so common amongst the developer population. I would think that for the majority of ASP.NET Web Services in production today, their developers completely ignored the XSD/WSDL details. But even if that were different: The rubber hits the road when we talk about policy. Can you type up a complete and consistent set of policy assertions for integrity and confidentiality and authentication using Kerberos and X.509 tokens without looking at the spec or a cheat sheet? How about combining that with assertions for WS-AT and WS-RM? As long as we keep the story reduced to XSD and WSDL, dealing with angle brackets might be something that someone could reasonably expect from a mortal programmer who has a life. Once we take policy into the picture, we had better start asking for tools that hide all those details. The interoperability problems of getting secure, reliable and transacted web services to work together are far harder than just getting services to talk. That’s part of the contract story, too. Yet, I cannot imagine that anybody would seriously demand that we all sit down and explicitly write these endless sequences of policy assertions and then feed our tools with them. At least I don’t want to do that, but that may just be me getting too old for this stuff.

Categories: Indigo

February 10, 2005
@ 10:55 PM

Bruce Williams illustrates how to turn my very simple “Hello World” Indigo sample into a queued service by changing the transport binding from HTTP to MSMQ (I think that’s radically cool). Now, the next step is to illustrate a Duplex conversation to get the response back to the caller. If Bruce or someone else isn’t going to beat me to it, I’ll show that once I get home from Warsaw tomorrow night. [Ah, by the way: Bruce! No need to “Mr.” me ;-)]

Categories: Indigo

February 10, 2005
@ 06:45 AM

Tim Ewald responds (along with a few others) to my previous post about WSDL and states: “Remember that WSDL/XSD/Policy is the contract, period. Any other view of your contract is just an illusion.”

WSDL and XSD and Policy are interoperable metadata exchange formats. That’s just about it. The metadata that’s contained in artifacts compliant with these standards can be expressed in a multitude of different ways. I do care about “my tool” (whatever that is) doing the right thing mapping from and to these metadata standards whenever required and I do care about “my tool” guiding me to stay within the limits of what these metadata formats can express.

But WSDL/XSD/Policy isn’t the contract. If you do ASMX, you can create server and client without you or any of the tools ever looking at or generating WSDL. And it works. If you use Indigo, you can do the same and, in fact, for generating any XML-based metadata from within an Indigo service, it’s even required to explicitly add the respective service behavior at present. The required metadata to make services work comes in many shapes and forms and is, for a given tool, typically richer than what you will find in the related WSDL/XSD/Policy, because not all of that metadata is related to the wire format itself.

If I need to tell someone who is not using my tool of choice how to talk to my service, I have my tool generate the respective metadata exchange documents and I want to be able to trust my tool that they’re “right”.

What I am stating here is simply my demand and expectation for the degree of “automatic interoperability” that I expect from the tools. I can read WSDL/XSD/Policy; out there, most people absolutely don’t seem to care about these details and I tend to agree with them that making this stuff work is someone else’s problem.

I don’t need to be able to read and write PDF to use PDF. I use PDF if I know that someone will open my document who is not using Microsoft Word. Still, that PDF doc isn’t the document. My Word source document is the document I edit and revise. The PDF is just one of several possible representations of its contents.

Categories: Indigo

XML is ugly and angle brackets are for plumbers. Unless you have a good reason to do so, you shouldn’t have to look at WSDL. Sharing this C# snippet here

[ServiceContract]
interface IHello
{
      [OperationContract]
      string SayHello(string name);
}

is a perfectly reasonable way to share contract between server and client, if you’ll be sticking to Indigo. A service can expose all the WS-MetadataExchange and XSD and WSDL you like so that other Web Service clients can bind to your service, but as long as you stay on the System.ServiceModel level and focus on writing a distributed systems solution instead of writing something that “does XML”, you won’t have to worry about all the goo that goes on in the basement. Staring at WSDL is about as interesting as looking at the output of “midl /Oicf”.

Categories: Indigo

February 9, 2005
@ 05:56 AM

using System;
using System.ServiceModel;

namespace IndiHello
{
      [ServiceContract]
      public class Hello
      {
            [OperationContract]
            public string SayHello(string name)
            {
                  return "Hello " + name;
            }
      }

      class Program
      {
            static void Main(string[] args)
            {
                  ServiceHost<Hello> host = new ServiceHost<Hello>(new Uri("http://localhost/hello"));
                  host.AddEndpoint(typeof(Hello), new BasicProfileHttpBinding(), "ep");
                  host.Open();
                  Console.WriteLine("Press ENTER to quit");
                  Console.ReadLine();
                  host.Close();
            }
      }
}

I am told that I can talk, so I do ;-)  Here’s a simple Indigo server. If you looked at the PDC 2003 Indigo bits, you will notice that the programming model changed quite a bit. I think that in fact, every single element of the programming model changed since then. And all for the better. The programming model is so intuitive by now that I am (almost) tempted to say “Alright, understood, next technology, please”.

So up there you have a class with an implicit service contract. An explicit service contract would be a standalone interface (that’s the proper way to do it, but I wanted to keep the first sample simple) with a [ServiceContract] attribute. Here, [ServiceContract] sits right on the class. Note that the class doesn’t derive from any special base class. Each method that you want to expose as an endpoint operation is labeled with [OperationContract]. These and a set of other attributes (along with a bunch of options you could set, but which I am not doing for the moment) control how the class contract is exposed to the outside world via Indigo.

In the Main method, you have a ServiceHost, which hosts the service (the class is parameterized with the implementation type) and which is initialized with the base address at which the service shall be hosted. The base address here is “http://localhost/hello” and with that maps into the namespace of http.sys at port 80. The endpoint can exist alongside any IIS-hosted websites, even though this particular app is hosted in its own little console-based app.

Into this host, I map the service contract with a BasicProfileHttpBinding() to the endpoint address “ep”, which means that messages to that particular service that flow through HTTP using the WS-I Basic Profile 1.0 shall be directed to the “http://localhost/hello/ep” endpoint. Once I have a binding in place (that could also be done in config), I Open() the service and the service listens. Once I am done listening, I Close() the service.
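For illustration, the same endpoint could plausibly be set up in config rather than code. This is a hedged sketch modeled on the config sample from the messaging post above; the exact schema of the milestone bits may differ:

```xml
<system.serviceModel>
    <services>
        <service serviceType="IndiHello.Hello, IndiHello">
            <!-- relative address "ep" resolves against the host's base address -->
            <endpoint address="ep"
                      bindingType="basicProfileHttpBinding"
                      contractType="IndiHello.Hello, IndiHello" />
        </service>
    </services>
</system.serviceModel>
```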

Isn’t too hard.

Categories: Indigo

We've built FABRIQ, we've built Proseware. We have written seminar series about Web Services Best Practices and Service Orientation for Microsoft Europe. I speak about services and aspects of services at conferences around the world. And at all events where I talk about Services, I keep hearing the same question: "Enough of the theory, how do I do it?"

Therefore we have announced a seminar/workshop around designing and building service oriented systems that puts together all the things we've found out in the past years about how services can be built on today's Microsoft technology stack and how your systems can be designed with migration to the next generation Microsoft technology stack in mind. Together with our newtelligence Associates, we are offering this workshop for in-house delivery at client sites world-wide and are planning to announce dates and locations for central, "open for all" events soon.

If you are interested in inviting us for an event at your site, contact Bart DePetrillo, or write to sales@newtelligence.com. If you are interested in participating at a central seminar, Bart would like to hear about it (no obligations) so that we can select reasonable location(s) and date(s) that fit your needs.

Categories: Architecture | SOA | FABRIQ | Indigo | Web Services

One year ago (plus 5 days), I posted this here on my blog. I just found it again through my referral stats. Of course, that post isn't about Juliet, at all. Fun.

Categories: Indigo | Web Services

February 15, 2004
@ 08:27 PM

I am currently writing the speaker notes for a service-oriented architecture workshop that Microsoft and newtelligence will run later this year. I was just working on the definitions of components and services and I think I found a reasonably short and clear definition for it:

One of the most loaded and least well defined terms in programming is "component". Unfortunately, the same is true for "service". There is particular confusion about the terms "component" and "service" in the context of SOA.

The term component is a development and deployment concept and refers to some form of compiled code. A component might be a JVM or CLR class, a Java bean or a COM class; in short, a component is any form of a unit of potentially reusable code that can be accessed by name, deployed and activated and can be assembled to build applications. Components are typically implemented using object-oriented programming languages and components can be used to implement services.

A service is a deployment and runtime concept. A service is strictly not a unit of code; it is rather a boundary definition that might be valid for several different concrete implementations. The service definition is deployed along with the components that implement it. The communication to and from a service is governed by data contracts and service policies. From the outside, a service is considered an autonomous unit that is solely responsible for the resource it provides access to. Services are used to compose solutions that may or may not span multiple applications.

Let me repeat the one sentence that made me go “damn, I think now I finally have the topic cornered”:

A service is strictly not a unit of code; it is rather a boundary definition that might be valid for several different concrete implementations.

Categories: Architecture | Indigo

On our 4 hour taxi ride from Portoroz in Slovenia to Zagreb in Croatia, I decided to make some significant changes to my Indigo slide deck for the tour. David Chappell called my talk an “impossible problem”, mostly because the scope of the talks we are doing is so broad, ranging from the big picture of Longhorn through Avalon and WinFS to the Whidbey innovations, and I am stuck in the middle with a technology that solves problems most event attendees don’t consider themselves to have.

So I took a rather dramatic step: I dropped almost all of the slides that explain how Indigo works. What’s left is mostly only the Service Model’s programming surface. For the eight slides I dropped, I added and modified six slides from the “Scalability” talk written by Steve Swartz and myself for last year’s “Scalable Applications Tour”, which now front the talk. Until about 20 minutes into the “new” talk, I don’t speak about Indigo, at all. And that turned out to be a really good idea.

As I’ve written before, many people who attend the events on this tour have no or little experience in writing distributed applications. In reality, the classic 2-tier client/server model where all user-code sits on one tier (let it be Windows Forms, VB6, ASP or ASP.NET) and the other tier is the database does still rule the world. And, no, the browser doesn’t count as a tier for me; it’s just a “remote display surface” for the presentation tier.

Instead of talking about features, I now talk about motivation. Using two use-case scenarios and high-level architectural overviews modeled after Hotmail and Amazon (that everybody knows) I explain the reasons why distributing work across multiple systems is a good thing, how such systems can be separated so that each of them can scale independently and what sort of services infrastructure is needed to implement them. And it works great. Once I have the audience nodding to the obvious goodness I can continue and map the requirements to Indigo features and explain the respective aspects of the service model. The flow of the talk is much better and the attendees get more immediate value out of it. If I weren’t so time constrained I would probably map it to Enterprise Services (now) and Indigo (future) all in the same talk and also show how to do the transition. I am sure that I can do that sort of talk at some event this year.

Lesson learned: Less features, more why. With the majority of developers the challenge isn’t about showing them how distributed systems are being improved; it’s about getting them to understand and possibly adopt the idea in the first place.

Categories: Talks | EMEA Longhorn Preview | Technology | Indigo

I am in Budapest today and I am just done with my Indigo talk (you can find the slides at http://codezone.info under “Talks”), having done it for the 6th time on this tour throughout Europe. After the events in Den Haag, Oslo, Copenhagen, Helsinki and Geneva, I still find Indigo a very difficult topic to talk about on this tour. It’s not about technology or because my talk doesn’t work: It’s about whether people think it’s relevant to their work.

The true challenge is to explain to the developers we meet that Indigo is going to be very important for them down the road. I find, when I talk to developers on this tour or look at their evaluation forms, that very many of them apparently still write fairly compact (to avoid the word monolithic) ASP.NET applications or Windows Forms applications that use a conservative client/server approach. All presentation and logic resides in one tier and the only remote component worth mentioning is the database. That means that the majority of the folks sitting in my talks haven’t even touched one of the existing distributed technology stacks that Indigo is set to replace.

The difficulty presenting Indigo on this tour – alongside sexy stuff like declarative UI programming with spinning Windows and Videos with alpha-blending in Avalon and googlefast cross-media searches across all of your local storage media as in WinFS – is that Indigo is about things that are hidden inside applications and do not surface to the user. Stuff that drives server-applications is sometimes hard to understand without knowing the architectural background and the motivations. (Sidenote: A while ago I heard a rumor from a usually trustworthy source that the spinning balls in the COM+ Explorer exist because COM+ was horribly hard to demo as well and the spinning balls provided a good way of visualizing that stuff was happening.)

The ideal talk for an unsuspecting audience with little knowledge in distributed systems would have to sell the whole idea of distributed systems to boot, the experiences and errors made, the reasons for why Web services are a good thing, the problems creating the motivation for and the principles of service oriented architectures, a set of some tangible application examples and use cases along with the solutions that Indigo provides; all of that in the same talk and within 75 minutes. And that in a way that developers get to see code and demos, too. That sort of talk would span about 20 years of distributed computing history. I am not sure this fits in 75 minutes. Therefore I think I will have to be happy with only a fraction of the audience being interested and/or willing to appreciate the things that I am talking about here. 

Very many folks think that the topics I am talking about are only relevant to “big apps” and have a hard time seeing the benefits of something like Indigo – much in the same way as it is with Enterprise Services or Web Services.

If you believe Don Box, who said at PDC that Indigo will ship at some point between Whidbey and Longhorn, and think about the implications of that, Indigo is in fact relevant to everyone writing applications that expose functionality to other applications in some way – now or at least quite soon. The first ship vehicle for Indigo will be, if Don’s statement holds water in its consequences, some service pack or upgrade pack for Windows Server 2003 and Windows XP. That means nothing less than the entire application infrastructure of Windows Server 2003 is getting a major upgrade probably in a year or so from now.

If you are writing applications using ASMX, Remoting or Enterprise Services today, the impact of Indigo’s arrival can be immediate if you want to make it so. If you code your applications cleverly today (following guidelines explained by Joe Long here or in my talk) and don’t play too many tricks on the infrastructure – for instance by using the Remoting extensibility points – you should have a fairly smooth upgrade path to Indigo. The goal is that upgrading code will be simple and mechanical in most cases.

Categories: EMEA Longhorn Preview | Indigo

January 24, 2004
@ 09:45 PM

Don says that BEA's Deputy CTO has missed the cluetrain. I absolutely agree with Don's opinion on this article and what's even worse than the things said is what the article implies. If that is BEA's official position, this is nothing less than an outing that they are passengers in the backseat of a car that is driven by IBM and Microsoft (switching drivers every once in a while) and that they're neither behind the spirit of the whole undertaking nor do they fully understand the specifications they have put their names on. Integration or standardization on the API level has failed miserably in countless attempts and any middleware company (including BEA) that is out there to compete on features must go beyond the least common denominator approach to win over customers. Does BEA have Indigo envy?

Categories: Technology | Indigo

The evolution of in-memory concept of messages in the managed Microsoft Web Services stack(s) is quite interesting to look at. When you compare the concepts of System.Web.Services (ASMX), Microsoft.Web.Services (WSE) and System.MessageBus (Indigo M4), you'll find that this most fundamental element has undergone some interesting changes and that the Indigo M4 incarnation of "Message" is actually a bit surprising in its design.

ASMX

In the core ASP.NET Web Services model (nicknamed ASMX), the concept of an in-memory message doesn't really surface anywhere in the programming model unless you use the ASMX extensibility mechanism. The abstract SoapMessage class, which comes in concrete SoapClientMessage and SoapServerMessage flavors, has two fundamental states that depend on the message stage that the message is inspected in: The message is either unparsed or parsed (some say "cracked").

If it's parsed you can get at the parameters that are being passed to the server or are about to be returned to the client, but the original XML data stream of the message is no longer available and all headers have likewise either been mapped onto objects or lumped into an "unknown headers" array. If the message is unparsed, all you get is a text stream that you'll have to parse yourself. If you want to add, remove or modify headers while processing a message in an extension, you will have to read and parse your copy of the input stream (the message text) and write the resulting message to an output stream that's handed onwards to the next extension or to the infrastructure. In essence that means that if you had two or three ASMX-style SOAP extensions that implement security, addressing and routing functionality, you'd be parsing the message three times and serializing it three times just so that the infrastructure would parse it yet again. Not so good.
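To make the repeated parse/serialize cost concrete, here is a minimal sketch of an ASMX SoapExtension following the well-known TraceExtension pattern; the class and helper names are illustrative:

```csharp
using System;
using System.IO;
using System.Web.Services.Protocols;

// Illustrative sketch; the structure follows the documented TraceExtension sample.
public class PassThroughExtension : SoapExtension
{
    Stream oldStream;
    Stream newStream;

    // ASMX chains extensions by letting each one interpose its own stream.
    public override Stream ChainStream(Stream stream)
    {
        oldStream = stream;
        newStream = new MemoryStream();
        return newStream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.BeforeDeserialize:
                // Incoming: to touch headers here, we would have to parse the
                // text in oldStream ourselves and write the modified message
                // to newStream -- the repeated parsing the text above describes.
                Copy(oldStream, newStream);
                newStream.Position = 0;
                break;
            case SoapMessageStage.AfterSerialize:
                // Outgoing: hand the (possibly rewritten) text onwards.
                newStream.Position = 0;
                Copy(newStream, oldStream);
                break;
        }
    }

    static void Copy(Stream from, Stream to)
    {
        TextReader reader = new StreamReader(from);
        TextWriter writer = new StreamWriter(to);
        writer.Write(reader.ReadToEnd());
        writer.Flush();
    }

    public override object GetInitializer(Type serviceType) { return null; }
    public override object GetInitializer(LogicalMethodInfo methodInfo,
                                          SoapExtensionAttribute attribute) { return null; }
    public override void Initialize(object initializer) { }
}
```

Chain three of these with header-rewriting logic and the message text is parsed and reserialized once per extension, before the infrastructure parses it again.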

WSE

The Web Services Enhancements (WSE) have a simple, but very effective fix for that problem. The WSE team needed to use the ASMX extensibility point but found that if they'd build all their required extensions using the ASMX model, they'd run into that obvious performance problem. Therefore, WSE has its own pipeline and its own extensibility mechanism that plugs as one big extension into ASMX, and when you write extensions (handlers) for WSE, you don't get a stream but an in-memory infoset in the form of a SoapEnvelope (which is derived from System.Xml.XmlDocument and is therefore a DOM). Parsing the XML text just once and having all processing steps work on a shared in-memory object model seems optimal. Can it really get any better than "parse once" as WSE does it?
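A hedged sketch of what such a handler looks like against that shared DOM, assuming the WSE 2.0 Microsoft.Web.Services2 filter model (the filter name is made up):

```csharp
using System;
using System.Xml;
using Microsoft.Web.Services2;

// Illustrative WSE input filter: no stream parsing, just DOM access.
public class ListHeadersFilter : SoapInputFilter
{
    public override void ProcessMessage(SoapEnvelope envelope)
    {
        // SoapEnvelope derives from XmlDocument; every filter in the
        // pipeline works against this one parsed infoset ("parse once").
        if (envelope.Header != null)
        {
            foreach (XmlNode header in envelope.Header.ChildNodes)
            {
                Console.WriteLine(header.Name);
            }
        }
    }
}
```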

Indigo

When you look at the Indigo concept of Message (the Message class in the next milestone will be the same in spirit, similar in concept and different in detail and simpler as a result), you'll find that it doesn't contain a reference to an XmlDocument or some other DOM-like structure. The Indigo message contains a collection of headers (which in the M4 milestone also come in an "in-memory only" flavor) and a content object, which has, as its most important member, an XmlReader-typed Reader property.

When I learned about this design decision a while ago, I was a bit puzzled why that's so. It appeared clear to me that if you kept the message parsed in a DOM, you'd have a good solution if you want to hand the message down a chain of extensibility points, because you don't need to reparse. The magic sentence that woke me up was "We need to support streaming". And then it clicked.

Assume you want to receive a 1GB video stream over an Indigo TCP multicast or UDP connection (even if you think that's a silly idea - work with me here). Because Indigo will represent the message containing that video as an XML Infoset (mind that this doesn't imply that we're talking about base64-encoded content in an UTF-8 angle bracket document and therefore 2GB on the wire), we've got some problems if there was a DOM based solution. A DOM like XmlDocument is only ready for business when it has seen the end tag of its source stream. This is not so good for streams of that size, because you surely would want to see the video stream as it downloads and, if the video stream is a live broadcast, there may simply be no defined end: The message may have a virtually infinite size with the "end-tag" being expected just shortly before judgment day.

There's something philosophically interesting about a message relaying a 24*7*365 video stream where the binary content inside the message body starts with the current video broadcast bits as of the time the message is generated and then never ends. The message can indeed be treated as being well-formed XML because there is always a theoretical end to it. The end-tag just happens to be a couple of "bit-years" away.

Back to the message design: When Indigo gets its hands on a transport stream, it layers a Message object over the raw bits available on the stream using an XmlReader. Then it peeks into the message and parses soap:Envelope and everything inside soap:Header. The headers it finds go into the in-memory header collection. Once it sees soap:Body, Indigo stops and backs off. The result of this is a partially parsed in-memory message for which all headers are available in memory and the body of the message is left sitting in an XmlReader. When the XmlReader sits on top of a NetworkStream, we now have a construct where Indigo can already work on the message and its control information (headers) while the network socket is still open and the rest of the message is still arriving (or portions haven't even been sent by the other party).
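The technique itself can be sketched with nothing but plain System.Xml. This is not Indigo's actual code, just an illustration of the pattern, assuming SOAP 1.1: parse eagerly up to soap:Body, then stop and return the reader still positioned on the unread body.

```csharp
using System.IO;
using System.Xml;

class StreamingSketch
{
    const string SoapNs = "http://schemas.xmlsoap.org/soap/envelope/";

    // Buffers the headers into a DOM; leaves the body sitting in the reader.
    public static XmlReader ReadHeaders(Stream wire, out XmlDocument headers)
    {
        XmlReader reader = new XmlTextReader(wire);
        headers = new XmlDocument();

        reader.MoveToContent();
        reader.ReadStartElement("Envelope", SoapNs);
        reader.MoveToContent();
        if (reader.LocalName == "Header" && reader.NamespaceURI == SoapNs)
        {
            // The header infoset is small; pulling it into memory is cheap.
            headers.LoadXml(reader.ReadOuterXml());
            reader.MoveToContent();
        }
        reader.ReadStartElement("Body", SoapNs);

        // The body content has not been read yet; whoever consumes the
        // returned reader pulls those bits directly off the wire.
        return reader;
    }
}
```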

Unless an infrastructure extension must touch the body (in-message body encryption or signatures do indeed spoil the party here), Indigo can process the message, ignore the body portion, and hand it to the application endpoint for processing as-is. When the application endpoint reads the message through the XmlReader, it therefore pulls the bits directly off the wire. A variant of this, and the case where it really gets interesting, is that arbitrarily large data streams can be routed over multiple Indigo hops using virtualized WS-Addressing addressing, where every intermediary server just forwards the bits to the next hop as they arrive. Combine this with publish-and-subscribe services and Indigo's broadcasting abilities, and this gets really sexy for all sorts of applications that need to traverse transport-level obstacles such as firewalls or where you simply can't use IP.

For business applications, this support for very large messages is not only very interesting but actually vital. In our BizTalk workshops we've had quite a few customers who exchange catalogs of engineering parts with other parties. These catalogs easily exceed 1GB in size on the wire. If you want to expand such messages into a DOM, you've got a problem. Consequently, neither WSE nor ASMX nor BizTalk Server nor any other DOM-based solution that isn't running on a well-equipped 64-bit box can successfully handle such real-customer-scenario messages. Once messages support streaming, you have that sort of flexibility.

The problem that remains with XmlReader is that once you touch the body, things get a bit more complex than with a DOM representation. The XmlReader is a "read once" construct that usually can't be reset to its initial state. That is especially true if the reader sits on top of a network stream and returns the translated bits as they arrive. Once you touch the message content in the infrastructure, the message is therefore "consumed" and can't be used for further processing. The good news, though, is that if you buffer the message content into a DOM, you can layer an XmlNodeReader over the DOM's document element and forward the message with that reader. If you only need to read parts of the message, or if you don't want to use the DOM, you can layer a custom XML reader over a combination of your buffered data and the original XmlReader.
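The buffer-and-replay trick looks roughly like this - again using the names the API ended up with in shipping WCF, which are an assumption relative to the milestone builds discussed here:

```csharp
// Sketch of the DOM-buffer/XmlNodeReader-replay technique, using the
// API names WCF later shipped with; hedged relative to the Indigo builds.
using System.Xml;
using System.ServiceModel.Channels;

static class BodyReplay
{
    // Consume the once-only body into a DOM, inspect it, then forward a
    // fresh message whose body is replayed through an XmlNodeReader.
    static Message InspectAndForward(Message incoming)
    {
        XmlDocument dom = new XmlDocument();
        using (XmlReader body = incoming.GetReaderAtBodyContents())
            dom.Load(body);

        // ...inspect dom as often as needed; the DOM can be re-read...

        XmlNodeReader replay = new XmlNodeReader(dom.DocumentElement);

        // (any headers beyond Action would have to be copied over as well)
        return Message.CreateMessage(incoming.Version,
                                     incoming.Headers.Action, replay);
    }
}
```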

Categories: Technology | Indigo | Web Services

January 24, 2004
@ 09:54 AM

The Microsoft Developer Days 2004 in Den Haag (The Hague) were a great event. Getting there was not so much fun (the train from Utrecht was split into two trains on the way and I ended up in Rotterdam instead of Den Haag at first), and neither was getting back (the train from Venlo to Düsseldorf simply didn't run because of "technical difficulties", so I had to take a rather expensive cab home).

I've had lots of interesting discussions, and the result of one was that I might be speaking at the SDGN's CttM conference. I'll definitely be back for the second run of the Architect's Forum in Zeewolde on March 29th.

The SDGN has said that I now need to learn enough Dutch so that I can do my CttM presentation in Dutch, but I don't know whether they are willing to give me so much time for a presentation that I can also spend long enough looking up each individual word. :)

My talk on Indigo apparently went well for the audience, and one of my fellow RDs even said that he learned more about Indigo in my talk than at the PDC (that's because I consolidated the PDC slides and therefore have it "all at once"), but personally I was a bit unhappy with it. Didn't flow right. Two slides too many, one slide missing (I need to explain "Dialogs"). This will be fixed for the next stop in Oslo on Monday.

Categories: Talks | EMEA Longhorn Preview | Indigo

I'm leaving shortly for Den Haag for the first installment of the Longhorn Developer Preview Tour throughout Europe as part of the Dutch Developer Days 2004. We start tomorrow and I am quite excited, since this is the first time I will speak about Indigo in any detail to a larger audience. I've witnessed Indigo "forming" from a distance when the team was still in "stealth mode" and it's great to see how it comes along.

But be forewarned: In my talk there will be no live demos. I have 75 minutes for the talk and I had to decide whether I concentrate on explaining the "M5" milestone that is currently in development in Redmond and which implements the (likely) final programming model or whether I allocate more time to the M4 model found in the PDC build. The decision that I made was that M4 is so different from M5 that unless you want to get a major degree in Longhorn development history or have way too much time on your hands, learning and therefore showing M4 code is almost pointless. I will show code, but it won't run.

If you want to check out how this first run of my talk goes (as usual, I don't really rehearse talks so this is as spontaneous, "fresh" and probably embarrassing as it gets on this tour), Microsoft Netherlands will have a live webcast tomorrow that you can log into at http://www.microsoft.com/netherlands/msdn/devdays/webcast.asp.

Categories: Talks | EMEA Longhorn Preview | Indigo

November 30, 2003
@ 06:14 PM

I'll put together the v1.5 build version of dasBlog next week. The v1.4 "PDC build" proved to be "true to the spirit of PDC bits" and turned out to have a couple of problems with the new "dasBlog" theme and some other inconveniences that v1.5 will fix. The true heroes of v1.5 are Omar and the many other frequent contributors to the workspace; I just didn't have enough time to add features recently.

As I blogged last week, I am very busily involved in an exciting (mind that I use the word not as carelessly as some marketing types) infrastructure project on service-oriented architectures, autonomous computing and agile machines. I wrote some 50 pages of very dense technical specification and a lot of "proof of concept" code in the past two weeks, and we're in the process of handing this off to the development team. I am having a great time and a lot of fun, but because the schedule is insanely tight for a variety of reasons (I am not complaining, I signed it knowingly), I've been on 16 hour days for most of the past two weeks.  In some ways, this is also an Indigo project, because I am loosely aligning some of my core architecture with a few fundamentals from the Indigo connector architecture published at PDC, so that we can take full advantage of Indigo once it's ready. Keeping the Message body in an XmlReader, as Indigo does, is an ingenious idea for what I am doing here. In essence, if you only need to look at the headers inside an intermediary in a one-way messaging infrastructure like the one I am building right now, you may never even need to look at anything in the body until you push the resulting message out again. So why suck it into a DOM? Just map the input stream to the output stream and hand the body through as you get it. That way, under certain circumstances, my bits may already be forwarding a message to the next hop when it hasn't even fully arrived yet.
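Mapping the input stream to the output stream needs nothing fancier than a pump loop; a minimal sketch:

```csharp
using System.IO;

static class BodyPump
{
    // Map the input stream to the output stream: forward the body bytes
    // to the next hop as they arrive instead of buffering the message.
    public static void Pump(Stream from, Stream to)
    {
        byte[] chunk = new byte[64 * 1024];
        int read;
        while ((read = from.Read(chunk, 0, chunk.Length)) > 0)
        {
            to.Write(chunk, 0, read);
            to.Flush(); // push the bits onward immediately
        }
    }
}
```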

One of the "innovative approaches" (for me, at least) is that within this infrastructure, which has a freely composable, nestable pipeline of "aspects", I am using my lightweight transaction manager to coordinate the failure management of such independently developed components. The difficulty of that and the absence of an "atomic" property of a composite pipeline activity are two things that bugged me most about aspects. There's a lot more potential in this approach, for instance enforcement of composition rules. It works great in theory and in the prototype code, and I am curious how it turns out once it hits a real-life use case. We're getting there soon. (My first thinking out loud about something like this was at the very bottom of this rant here.) I'll keep you posted.

In unrelated news: Because I know that I'll be doing a lot of Longhorn work and demos in the upcoming months (my Jan/Feb/Mar schedule looks like I am going to visit every EMEA software developer personally), I've meanwhile figured that my loyal and reliable digital comrade (a Dell Inspiron 8100) will be retired. Its successor will have a green chassis.

Categories: dasBlog | Indigo | Web Services

November 11, 2003
@ 10:55 PM

Joe Long, the Product Unit Manager for XML Enterprise Services at Microsoft, talks about the Indigo migration story in this recorded presentation on MSDN. If you weren't at Joe's PDC talk and think you don't have 37 minutes for this, you still can't afford to miss the prescriptive guidance section starting at slide 60 if you have ever crossed, or ever will cross, an application domain boundary with a Remoting, Enterprise Services or Web service call on the current stacks. And now leave here and go there.

Categories: Indigo

Between PDC and now, I was in Redmond on Monday and Tuesday at a meeting with the Indigo team. One of the topics discussed were the new transaction management capabilities that are part of Indigo (which, for Longhorn, includes a lightweight transaction manager).

Ingo was there, too, and we had a little argument about how hard it is to write transaction resource managers. Ingo thought that it would be awfully hard to write them and that average programmers would never do so and wouldn’t see the need for them. I said “hey, it’s really trivial”, explained that I consider transactions a very general programming paradigm for much more than just databases and told him that I would write a little demo to prove it. I wrote the demo on the plane going home in about 3 hours. Here it is.

The “2 Phase Commit Puzzle” application is a little Windows Forms puzzle that doesn’t use Indigo or the Longhorn bits, but rather employs a little lightweight 2PC transaction manager that Steve Swartz and myself hacked up when we were on our Scalable Applications tour this spring.

The puzzle uses four resource managers (transaction participants). The TileWorker keeps track of the tiles as they are moved around, always votes “yes” on Prepare, does nothing on Commit and rolls all tiles back into their original (shuffled) state on Abort. The TimeoutWorker votes “yes” if the puzzle is completed (pressing the “Done” button) within the preset time-span and “no” otherwise; it does nothing on either Commit or Abort. The GridWorker votes “yes” on Prepare if the puzzle is completed (the order is correct) and “no” otherwise; it, too, does nothing on Commit or Abort. The OutcomeContingentMessage is a participant that always votes “yes” on Prepare and shows a “Congratulations” message on Commit and a “You failed!” message on Abort.

The great thing about this little puzzle is that I could add arbitrary other success/failure conditions for the outcome of the puzzle (e.g. number of moves) without having to rewrite or even touch the code determining the other conditions or the code emitting the result message. I would just have to hook in the new resource and feed it with information from the grid.
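For illustration, a participant in such a 2PC scheme needs no more than a Prepare/Commit/Abort shape. The interface and names below are a hypothetical sketch, not the actual WorkSet contract from the download:

```csharp
// Hypothetical participant contract for a lightweight 2PC coordinator;
// the actual WorkSet interface in the source archive may differ.
interface ITransactionParticipant
{
    bool Prepare();  // vote: true = "yes", false = "no"
    void Commit();
    void Abort();
}

// A new success condition, e.g. a move counter, plugs in without touching
// the tile, timeout, grid or message participants.
class MoveCountWorker : ITransactionParticipant
{
    private readonly int maxMoves;
    private int moves;

    public MoveCountWorker(int maxMoves) { this.maxMoves = maxMoves; }

    public void RecordMove() { moves++; }   // fed with information from the grid

    public bool Prepare() { return moves <= maxMoves; }
    public void Commit() { }
    public void Abort()  { }
}
```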

Transactions aren’t just for databases. The discussion of the theory behind this is of course in our already well-known transaction deck.

Below are the download links to the game executable (in IE, you need to right-click and save it to the local disk; it will not work if started directly from IE) and the source code archive, including the simple WorkSet transaction manager. Check it out.

Download: newtelligence.TxPuzzle.exe
Download: newtelligence.TxPuzzle.zip

Categories: Indigo

October 28, 2003
@ 07:55 PM

The typical PDC attendee is very special. PDC is not like TechEd where you get very practical information on today’s shipping products. PDC is about futures and it requires a lot of imagination of how applications could look and could work on the new platform. It’s about building excitement for the things to come. PDC attendees are the folks who will make the first wave of applications happen. They are excited about technology and they love to code.

Don Box's talk yesterday afternoon (WSV201) was very much about now. I heard a few people complain that he didn't show enough new code. I don't think he should have. I found his talk very important, and Don delivered his message very well. Don's talk was very much about architecture. No matter how much you want to see code, it's not the 1990's anymore. Simply hacking up an app won't let you play in a connected application ecosystem that's powered by Web services. WinFX will enable better applications by greatly simplifying the coding of complex applications and making developers more productive. You'll code less. Code isn't all that matters. Architecture matters. Negotiation and contracts matter. Design matters.

There were four key takeaways from his talk: Boundaries between applications are explicit. Indigo's programming model is different from previous distributed programming models such as COM and Remoting, because it doesn't make objects implicitly remote. You need to declare things as being remote. The fact that you're theoretically able to write a local application and can then write a configuration script that distributes this application across multiple machines using Remoting was a naïve approach. Likewise, writing a COM application that's built as a local application and reconfiguring it to run as a distributed application using a different registry setup is a naïve approach. With Indigo, you will need to start writing applications explicitly as being remote. If you love objects, you will find a few things very restricting in this world, at least at first sight. There are no automatically marshaled callbacks, interfaces and objects. There are messages, not object references, going across the wire. The endpoints of communication, called services, aren't fragments of the same application based on the same types and classes. Services are autonomous units which adhere to compatible data contracts and policy, not dependent units that use identical implementations. We share schema, not type.

Don recommended, as I’ve done earlier here on the blog, one of the most important Indigo talks for anyone who’s building software on today’s platform (that means: everyone at PDC): WSV203, “Indigo”: Connected Application Technology Roadmap; Wednesday, 11:30am, 409AB.  Go.

Categories: PDC 03 | Indigo

Here’s my quick, two sentence definition of Indigo in order to give you an idea about the scope of this thing:

Indigo is the successor technology and the consolidation of DCOM, COM+, Enterprise Services, Remoting, ASP.NET Web Services (ASMX), WSE, and the Microsoft Message Queue. It provides services for building distributed systems all the way from simplistic cross-appdomain message passing and ORPC to cross-platform, cross-organization, vastly distributed, service-oriented architectures providing reliable, secure, transactional, scalable and fast, online or offline, synchronous and asynchronous XML messaging.

Categories: PDC 03 | Indigo

My good friend Steve Swartz is giving blogging a second try and this time for real. Given that the stuff he's been working on is/was in the stealthier areas of the Indigo effort (not the public WS-* specs), it was pretty difficult for him to blog about work, but now with PDC things are changing.

In an effort to get newtelligence's PDC T-Shirt, Doug Purdy switched from Radio to dasBlog as well.

These two blogs will be very interesting places to watch if you are interested in the Indigo programming model.

Doug Purdy is the Program Manager for the new serialization framework (which consolidates XmlSerializer, BinaryFormatter and SoapFormatter), Steve Swartz drives the Indigo programming model that all of us will use.

Categories: PDC 03 | Indigo

Brad More is asking whether and why he should use Enterprise Services.

Brad, if you go to the PDC, you can get the definitive, strategic answer on that question in this talk:

“Indigo”: Connected Application Technology Roadmap
Track: Web/Services   Code: WSV203
Room: Room 409AB   Time Slot: Wed, October 29 11:30 AM-12:45 PM
Speakers: Angela Mills, Joe Long

Joe Long is Product Unit Manager for Enterprise Services at Microsoft, a product unit that is part of the larger Indigo group. The Indigo team owns Remoting, ASP.NET Web Services, Enterprise Services, all of COM/COM+ and everything that has to do with Serialization.

And if you want to hear the same song sung by the technologyspeakmaster, go and hear Don:

“Indigo": Services and the Future of Distributed Applications
Track: Web/Services   Code: WSV201
Room: Room 150/151/152/153   Time Slot: Mon, October 27 4:45 PM-6:00 PM
Speaker: Don Box

If you want to read the core message right now, just scroll down here. I've been working directly with the Indigo folks on the messaging for my talks at TechEd in Dallas earlier this year as part of the effort of setting the stage for Indigo's debut at the PDC.

I'd also suggest that you don't implement your own ES clone using custom channel sinks, context sinks, or formatters, and that you ignore the entire context model of .NET Remoting, if you want to play in Indigo-Land without having to rewrite a great deal of your apps. The lack of security support in Remoting is not a missing feature; Enterprise Services is layered on top of Remoting and provides security. The very limited scalability of Remoting on any transport but cross-appdomain is not a real limitation either; if you want to scale, use Enterprise Services. Check out this page from my old blog for a few intimate details on transport in Enterprise Services.

ASMX is the default, ES is the fall-back strategy if you need the features or the performance, and Remoting is the cheap, local ORPC model.

If you rely on ASMX and ES today, you'll have a pretty smooth upgrade path. Take that expectation with you and go to Joe's session.

[PS: What I am saying there about ES marshaling not using COM/Interop is true except for two cases that I found later: Queued Components and calls with isomorphic call signatures where the binary representation of COM and the CLR is identical - like with a function that passes and returns only ints. The reason why COM/Interop is used in those cases is very simple: it's a lot faster.] 

Categories: PDC 03 | Technology | COM | Enterprise Services | Indigo