This session is a follow-up to the Service Bus session I did at the //build conference and explains advanced usage patterns:

Categories: .NET Services | AppFabric | Azure | Talks | MSMQ | Web Services

From //build in Anaheim

Categories: AppFabric | Architecture | SOA | ISB | Web Services

Our team’s Development Manager MK (Murali Krishnaprasad) and I were interviewed by Michael Washam about the May 2011 CTP release of Windows Azure AppFabric. We discuss new technologies such as Topics, Queues, and Subscriptions, and how this relates to doing async development in the cloud.

 

Republished from Channel 9

Categories: AppFabric | Architecture | ISB | Web Services

My PDC10 session is available online (it was pre-recorded). I talk about the new ‘Labs’ release that we deployed into the datacenter this week and about a range of future capabilities that we’re planning for Service Bus. Some of those capabilities are a bit further out and are about bringing back popular features from the .NET Services incubation days (like Push and Service Orchestration); some are entirely new.

One important note about the new release at http://portal.appfabriclabs.com – for Service Bus, this is a focused release that mostly provides new features and doesn’t cover the full capability scope of the production system and SDK. The goal here is to provide insight into an ongoing development process and an opportunity for feedback as we continue to evolve AppFabric. So don’t draw any conclusions from this release about what we’re going to do with the capabilities already in production.

Click here to go to the talk.

Categories: AppFabric | Azure | Technology | Web Services

Book cover of Programming WCF Services

Juval Löwy’s very successful WCF book is now available in its third edition – and Juval asked me to update the foreword this time around. It’s been over three years since I wrote the foreword to the first edition, so it was time for an update: WCF has moved on quite a bit, and its use in the customer landscape and inside of Microsoft has deepened. We’re building a lot of very interesting products on top of the WCF technology across all businesses – not least of which is the Azure AppFabric Service Bus that I work on, which is entirely based on WCF services.

You can take a peek into the latest edition at the O’Reilly website and read my foreword if you care. To be clear: It’s the least important part of the whole book :-)

Categories: AppFabric | Azure | WCF | Web Services

In case you need a refresher or update on the things our team and I work on at Microsoft, go here for a very recent and very good presentation by my PM colleague Maggie Myslinska from TechEd Australia 2010 about Windows Azure AppFabric, with Service Bus demos and a demo of the new Access Control V2 CTP.

Categories: AppFabric | SOA | Azure | Technology | ISB | WCF | Web Services

I put the slides for my talks at NT Konferenca 2010 on SkyDrive. The major difference from my APAC slides is that I had to put compute and storage into one deck due to the conference schedule, but instead of purely consolidating and cutting down the slide count, I also incorporated some common patterns that came out of debates in Asia and added slides on predictable and dynamic scaling as well as on multitenancy. Sadly, I need to rush through all that in 45 minutes today.

 

Categories: AppFabric | Architecture | Azure | Talks | Technology | Web Services

My office neighbor, our Service Bus Test Lead Vishal Chowdhary, put together a bundle of code and documentation for how to use Service Bus with Server AppFabric and IIS 7.5. Here: http://code.msdn.microsoft.com/ServiceBusDublinIIS

Categories: AppFabric | Azure | Web Services

Seht Euch mal die Wa an, wie die Wa ta kann. Auf der Mauer, auf der Lauer sitzt ’ne kleine Wa!

It’s a German children’s song. The song starts out with “… sitzt ‘ne kleine Wanze” (bedbug) and with each verse you leave off a letter: Wanz, Wan, Wa, W, – silence.

I’ll do the same here, but not with a bedbug:

Let’s sing:

<soap:Envelope xmlns:soap="..." xmlns:wsaddr="..." xmlns:wsrm="..." xmlns:wsu="..." xmlns:app="...">
   <soap:Header>
         <wsaddr:Action>http://tempuri.org/1.0/Status.set</wsaddr:Action>
         <wsrm:Sequence>
              <wsrm:Identifier>urn:session-id</wsrm:Identifier>
              <wsrm:MessageNumber>5</wsrm:MessageNumber>
          </wsrm:Sequence>
          <wsse:Security xmlns:wsse="...">
               <wsse:BinarySecurityToken ValueType="http://tempuri.org#CustomToken"
                                         EncodingType="...#Base64Binary" wsu:Id="MyID">
                          FHUIORv...
                </wsse:BinarySecurityToken>
               <ds:Signature xmlns:ds="...">
                  <ds:SignedInfo>
                      <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                      <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#md5"/>
                      <ds:Reference URI="#MsgBody">
                            <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#md5"/>
                            <ds:DigestValue>LyLsF0Pi4wPU...</ds:DigestValue>
                      </ds:Reference>
                 </ds:SignedInfo>
                 <ds:SignatureValue>DJbchm5gK...</ds:SignatureValue>
                 <ds:KeyInfo>
                  <wsse:SecurityTokenReference>
                    <wsse:Reference URI="#MyID"/>
                   </wsse:SecurityTokenReference>
               </ds:KeyInfo>
             </ds:Signature>
         </wsse:Security>
         <app:ResponseFormat>Xml</app:ResponseFormat>
         <app:Key wsu:Id="AppKey">27729912882….</app:Key>
    </soap:Header>
    <soap:Body wsu:Id="MsgBody">
          <app:status>Hello, I’m good</app:status>
     </soap:Body>
</soap:Envelope>

Not a very pretty song, I’ll admit. Let’s drop some stuff. Let’s assume that we don’t need to tell the other party that we’re looking to give it an MD5 signature; let’s say that’s implied, and so is the canonicalization algorithm. Let’s also assume that the other side already knows the security token and the key. Since we only have a single digest here and yield a single signature, we can collapse everything down to the signature value. Heck, you may not even know what all of that means. Verse 2:

<soap:Envelope xmlns:soap="..." xmlns:wsaddr="..." xmlns:wsrm="..." xmlns:wsu="..." xmlns:app="...">
   <soap:Header>
         <wsaddr:Action>http://tempuri.org/1.0/Status.set</wsaddr:Action>
         <wsrm:Sequence>
              <wsrm:Identifier>urn:session-id</wsrm:Identifier>
              <wsrm:MessageNumber>5</wsrm:MessageNumber>
          </wsrm:Sequence>
          <wsse:Security xmlns:wsse="...">
               <ds:Signature xmlns:ds="...">
                  <ds:SignatureValue>DJbchm5gK...</ds:SignatureValue>
             </ds:Signature>
         </wsse:Security>
         <app:ResponseFormat>Xml</app:ResponseFormat>
         <app:Key wsu:Id="AppKey">27729912882….</app:Key>
    </soap:Header>
    <soap:Body wsu:Id="MsgBody">
          <app:status>Hello, I’m good</app:status>
     </soap:Body>
</soap:Envelope>

Better. Now let’s strip all these extra XML namespace decorations since there aren’t any name collisions as far as I can see. We’ll also collapse the rest of the security elements into one element since there’s no need for three levels of nesting with a single signature. Verse 3:

<Envelope>
   <Header>
         <Action>http://tempuri.org/1.0/Status.set</Action>
         <Sequence>
              <Identifier>urn:session-id</Identifier>
              <MessageNumber>5</MessageNumber>
          </Sequence>
          <SignatureValue>DJbchm5gK...</SignatureValue>
          <ResponseFormat>Xml</ResponseFormat>
          <Key>27729912882….</Key>
    </Header>
    <Body>
       <status>Hello, I’m good</status>
     </Body>
</Envelope>

Much better. The whole angle-bracket business and the nesting seem semi-gratuitous and repetitive here, too. Let’s make that a bit simpler. Verse 4:

         Action=http://tempuri.org/1.0/Status.set
         Sequence-Identifier=urn:session-id
         Sequence-MessageNumber=5
         SignatureValue=DJbchm5gK...
         ResponseFormat=Xml
         Key=27729912882….
         status=Hello, I’m good

Much, much better. Now let’s get rid of that weird URI up there, split up the action and the version info, make some of these keys a little more terse, and turn that into a format that’s easily transmittable over HTTP. For what we have here, application/x-www-form-urlencoded would probably be best. Verse 5:

         method=Status.set
         &v=1.0
         &session_key=929872172..
         &call_id=5
         &sig=DJbchm5gK...
         &format=Xml
         &api_key=27729912882….
         &status=Hello,%20I’m%20good

Oops. Facebook’s Status.set API. How did that happen? I thought that was REST?

Now play the song backwards. The “new thing” is largely analogous to where we started before the WS-* Web Services stack and its CORBA/DCE/DCOM predecessors came around, and there are, believe it or not, good reasons for all of that additional “overhead”: a common way to frame message content and the related control data, a common way to express complex data structures and distinguish between data domains, a common way to deal with addressing in multi-hop or store-and-forward messaging scenarios, an agreed notion of sessions and message sequencing, a solid mechanism for protecting the integrity of messages and parts of messages. This isn’t all just stupid.

It’s well worth discussing whether messages need to be expressed as XML 1.0 text on the wire at all times. I don’t think they need to and there are alternatives that aren’t as heavy. JSON is fine and encodings like the .NET Binary Encoding or Fast Infoset are viable alternatives as well. It’s also well worth discussing whether WS-Security and the myriad of related standards that were clearly built by security geniuses for security geniuses really need to be that complicated or whether we could all live with a handful of simple profiles and just cut out 80% of the options and knobs and parameters in that land.

I find it very sad that the discussion isn’t happening. Instead, people use the “REST” moniker as the escape hatch to conveniently ignore any existing open standard for tunnel-through-HTTP messaging and completely avoid the discussion.

It’s not only sad, it’s actually a bit frustrating. As one of the people responsible for the protocol surface of the .NET Service Bus, I am absolutely not at liberty to ignore what exists in the standards space. And that isn’t a mandate handed down to me; it’s something I do because I believe it’s right to live within the constraints of the standards frameworks that exist.

When we sit down and talk about a REST API, we’re designing a set of resources – which may result in splitting a thing like a queue into two resources, head and tail – and then we put RFC 2616 on the table and try to be very precise in picking the appropriate predefined HTTP method for a given semantic and in mapping the HTTP 2xx, 3xx, 4xx, and 5xx status codes to success and error conditions. We’re also trying to avoid inventing new ways to express things for which standards exist. There’s a standard for how to express and manage lists, ATOM and APP, and hence we use that as a foundation, using the designed extension points to add data to those lists whenever necessary.
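To make that concrete, here is a hypothetical sketch – the resource names are invented for illustration, and this is not the actual Service Bus protocol – of how such a head/tail queue split could map onto HTTP methods and status codes:

   POST /myQueue/tail HTTP/1.1      -- enqueue a message; 201 Created on success
   GET /myQueue/head HTTP/1.1       -- peek at the frontmost message; 200 OK, or 404 Not Found if the queue is empty
   DELETE /myQueue/head HTTP/1.1    -- consume the frontmost message; 200 OK with the message as the response entity, or 404 Not Found if the queue is empty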

When we’re designing an RPC SOAP API, we intentionally try to avoid inventing new protocol surface and try to leverage as much from the existing and standardized stack as we possibly can – at a minimum we’ll stick with established patterns such as the Create/GetInfo/Renew/Delete pattern for endpoint factories with renewal (which is used in several standards). I’ll add that we are – ironically – a bit backlogged on the protocol documentation for our SOAP endpoints and have more info on the REST endpoint in the latest SDK, but we’ll make that up in the near future.

So – can I build “REST” (mind the quotes) protocols that are as reduced as Facebook’s, Twitter’s, Flickr’s, etc.? Absolutely. There wouldn’t be much new work. It’s just a matter of how we put messages on and pluck messages off the wire. It’s really mostly a matter of formatting, and we have a lot of the necessary building blocks in the shipping WCF bits today. I would just omit a bunch of decoration as things go out and make a bunch of assumptions about things that come in.
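For illustration only, here’s a minimal sketch of how such a reduced surface can be put together with the WCF 3.5 web programming model (WebHttpBinding). The service and operation names are made up, and this shows the shape of the thing, not our actual protocol surface:

   using System;
   using System.ServiceModel;
   using System.ServiceModel.Web;

   [ServiceContract]
   public interface IStatusService
   {
       // POST /status with a bare XML payload -- no envelope, no addressing,
       // no signature headers; transport security and an API key would have
       // to carry that weight instead.
       [OperationContract]
       [WebInvoke(Method = "POST", UriTemplate = "status")]
       void SetStatus(string status);
   }

   public class StatusService : IStatusService
   {
       public void SetStatus(string status)
       {
           Console.WriteLine("New status: " + status);
       }
   }

   class Program
   {
       static void Main()
       {
           // WebServiceHost wires up WebHttpBinding and the web dispatch
           // behavior automatically for the base address.
           using (WebServiceHost host = new WebServiceHost(
               typeof(StatusService), new Uri("http://localhost:8080/")))
           {
               host.Open();
               Console.WriteLine("Listening. Press ENTER to quit.");
               Console.ReadLine();
           }
       }
   }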

I just have a sense that I’d be hung upside down from a tree by the press and the blogging, twittering, facebooking community if I, as someone at Microsoft, didn’t follow the existing open and agreed standards, or at least use protocols that we’ve published under the OSP, and instead just started to do my own interpretative dance – even if that looked strikingly similar to what the folks down in the Valley are doing. At the very least, someone would call it a rip-off.

What do you think? What should I/we do?

Categories: .NET Services | Architecture | Azure | Technology | ISB | Web Services

April 3, 2008
@ 06:10 AM

Earlier today I hopefully gave a somewhat reasonable, simple answer to the question "What is a Claim?" Let's try the same with "Token":

In the WS-* security world, "Token" is really just another name the security geniuses decided to use for "handy package for all sorts of security stuff". The most popular type of token is the SAML (just say "samel") token. If the ladies and gentlemen designing and writing security platform infrastructure and frameworks are doing a good job, you might want to know about the existence of such a thing, but otherwise be blissfully ignorant of all the gory details.

Tokens are meant to be a thing that you need to know about in much the same way you need to know about ... ummm... rebate coupons you can cut out of your local newspaper or all those funny books that you get in the mail. I really have no idea how the accounting works behind the scenes between the manufacturers and the stores, but it doesn't interest me much, either. What matters to me is that we get $4 off that jumbo pack of diapers, and we go through a lot of those these days with a 9-month-old baby here at home. We cut out the coupon, present it at the store, four bucks saved. Works for me.

A token is the same kind of deal. You go to some (security) service, get a token, and present that token to some other service. The other service takes a good look at the token and figures whether it 'trusts' the token issuer and might then do some further inspection; if all is well you get four bucks off. Or you get to do the thing you want to do at the service. The latter is more likely, but I liked the idea for a moment.

Remember when I mentioned the surprising fact that people lie from time to time when I wrote about claims? Well, that's where tokens come in. The security stuff in a token is there to keep people honest and to make 'assertions' about claims. The security dudes and dudettes will say "Err, that's not the whole story", but for me it's good enough. It's actually pretty common (that'll be their objection) that there are tokens that don't carry any claims and where the security service effectively says "whoever brings this token is a fine person; they are ok to get in". It's like having a really close buddy relationship with the boss of the nightclub when you are having troubles with the monsters guarding the door. I'm getting a bit ahead of myself here, though.

In the post about claims I claimed that "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln". That's a pretty obvious lie. If there were such a thing as a one-click shopping button for companies on some Microsoft intranet site (there isn't, don't get any ideas) and I were to push it, I surely should not be authorized to execute the transaction. The imaginary "just one click and you own Xigg" button would surely have some sort of authorization mechanism on it.

I don't know what Xigg is assumed to be worth these days, but there would actually be a second authorization gate to check. I might indeed be authorized to do one-click shopping for corporate acquisitions, but even with my made-up $5Bln limit claim, Xigg may just be worth more than I'm claiming I'm authorized to approve. I digress.

How would the one-click-merger-approval service be secured? It would expect some sort of token that absolutely, positively asserts that my claim "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln" is truthful, and the one-click-merger-approval service would have to absolutely trust the security service that is making that assertion. The resulting token that I'm getting from the security service would contain the claim as an attribute of the assertion, and that assertion would be signed and encrypted in mysterious (to me) yet very secure and interoperable ways, so that I can't tamper with it no matter how closely I look at the token while I have it in my hands.

The service receiving the token is the only one able to crack the token (I'll get to that point in a later post) and look at its internals and the asserted attributes. So what if I were indeed authorized to spend a bit of Microsoft's reserves and I were trying to acquire Xigg at the touch of a button and, for some reason I wouldn't understand, the valuation were outside my acquisition limit? That's the service's job. It'd look at my claim, understand that I can't spend more than $5Bln and say "nope!" - and it would likely send email to SteveB under the covers. Trouble.

Bottom line: For a client application, a token is a collection of opaque (and mysterious) security stuff. The token may contain an assertion (saying "yep, that's actually true") about a claim or a set of claims that I am making. I shouldn't have to care about the further details unless I'm writing a service and I'm interested in some deeper inspection of the claims that have been asserted. I will get to that.

Before that, I notice that I talked quite a bit about some sort of "security service" here. Next post...

Categories: Architecture | SOA | CardSpace | WCF | Web Services

April 2, 2008
@ 08:20 PM

If you ask any search engine "What is a Claim?" and you mean the sort of claim used in the WS-* security space, you'll likely find an answer somewhere, but that answer is just as likely buried in a sea of complex terminology that is only really comprehensible if you have already wrapped your head around the details of the WS-* security model. I would have thought that by now there would be a simple and not too technical explanation of the concept that's easy to find on the Web, but I haven't really had success finding one. 

So "What is a Claim?" It's really simple.

A claim is just a simple statement like "I am Clemens Vasters", or "I am over 21 years of age", or "I am a Microsoft employee", or "I work in the Connected Systems Division", or "I am authorized to approve corporate acquisitions with a transaction volume of up to $5Bln". A claim set is just a bundle of such claims.

When I walk up to a service with some client program and want to do something on the service that requires authorization, the client program sends a claim set along with the request. For the client to know what claims to send along, the service lets it know about its requirements in its policy.

When a request comes in, this imaginary (U.S.) service looks at the request knowing "I'm a service for an online game promoting alcoholic beverages!". It then looks at the claim set, finds the "I am over 21 years of age" claim and thinks "Alright, I think we've got that covered".

The service didn't really care who was trying to get at the service. And it shouldn't. To cover the liquor company's legal behind, they only need to know that you are over 21. They don't really need to know (and you probably don't want them to know) who is talking to them. From the client's perspective that's a good thing, because the client is now in a position to refuse giving out (m)any clues about the user's identity and only provide the exact data needed to pass the authorization gate. Mind that the claim isn't the date of birth for that exact reason. The claim just says "over 21".
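For the code-minded, here is a minimal sketch of how a WCF service could rummage through the asserted claims for exactly that statement – the claim type URI is made up for illustration:

   using System.IdentityModel.Claims;
   using System.IdentityModel.Policy;
   using System.ServiceModel;

   static bool CallerIsOver21()
   {
       // Walk the claim sets that the caller's token(s) brought along
       // and look for the one claim this service cares about.
       AuthorizationContext authContext =
           ServiceSecurityContext.Current.AuthorizationContext;
       foreach (ClaimSet claimSet in authContext.ClaimSets)
       {
           foreach (Claim claim in claimSet.FindClaims(
               "http://example.org/claims/over21",   // hypothetical claim type URI
               Rights.PossessProperty))
           {
               return true;   // an issuer we trust asserted the claim
           }
       }
       return false;
   }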

Providing control over what claims are being sent to a service (I'm lumping websites, SOAP, and REST services all in the same bucket here) is one of the key reasons why Windows CardSpace exists, by the way. The service asks for a set of claims, you get to see what is being asked for, and it's ultimately your personal, interactive decision to provide or refuse to provide that information.

The only problem with relying on simple statements (claims) of that sort is that people lie. When you go to the Jack Daniel's website, you are asked to enter your date of birth before you can proceed. In reality, it's any date you like, and a 10-year-old kid is easily smart enough to figure that out.

All that complex security stuff is mostly there to keep people honest. Next time ...

Categories: Architecture | SOA | CardSpace | WCF | Web Services

We love WS-* as much as we love Web-style services. I say "Web-style", fully knowing that the buzzterm is REST. Since REST is an architectural style and not an implementation technology, it makes sense to make a distinction; also, claiming complete RESTfulness for a system is actually a pretty high bar to aspire to. So in order to avoid monikers like POX or Lo-REST/Hi-REST, I just call it what this is all about to mere mortals who don't have an advanced degree in HTTP philosophy: services that work like the Web – or Web-style. That's not to say that a Web-style service cannot be fully RESTful. It surely can be. But if all you want to do is GET to serve up data into mashups and manipulate your backend resources in some other way, that's up to you. Anyways....

Tomorrow at 10:00am (Session DEV03, Room Delfino 4101A), our resident Lo-REST/Hi-REST/POX/Web-Style Program Manager Steve Maine and our Architect Don Box will explain to you how to use the new Web-Style "Programmable Web" features that we're adding to the .NET Framework 3.5 to implement the server magic and the service-client magic to power all the user experience goodness you've seen here at MIX.

Navigating the Programmable Web
Speaker(s): Don Box - Microsoft, Steve Maine
Audience(s): Developer
RSS. ATOM. JSON. POX. REST. WS-*. What are all these terms, and how do they impact the daily life of a developer trying to navigate today’s programmable Web? Join us as we explore how to consume and create Web services using a variety of different formats and protocols. Using popular services (Flickr, GData, and Amazon S3) as case studies, we look at what it takes to program against these services using the Microsoft platform today and how that will change in the future.
If you are in Vegas for MIX, come see the session. I just saw the demo, it'll be good.
Categories: Talks | Technology | WCF | Web Services

Christian Weyer shows off the few lines of pretty straightforward WCF code & config he needed to figure out in order to set up a duplex conversation through BizTalk Services.

Categories: Architecture | SOA | BizTalk | WCF | Web Services | XML

Steve has a great analysis of what BizTalk Services means for Corzen and how he views it in the broader industry context.

Categories: Architecture | SOA | IT Strategy | Technology | BizTalk | WCF | Web Services

April 25, 2007
@ 03:28 AM

"ESB" (for "Enterprise Service Bus") is an acronym floating around in the SOA/BPM space for quite a while now. The notion is that you have a set of shared services in an enterprise that act as a shared foundation for discovering, connecting and federating services. That's a good thing and there's not much of a debate about the usefulness, except whether ESB is the actual term is being used to describe this service fabric or whether there's a concrete product with that name. Microsoft has, for instance, directory services, the UDDI registry, and our P2P resolution services that contribute to the discovery portion, we've got BizTalk Server as a scalable business process, integration and federation hub, we've got the Windows Communication Foundation for building service oriented applications and endpoints, we've got the Windows Workflow Foundation for building workflow-driven endpoint applications, and we have the Identity Platform with ILM/MIIS, ADFS, and CardSpace that provides the federated identity backplane.

Today, the division I work in (Connected Systems Division) has announced BizTalk Services, which John Shewchuk explains here and Dennis Pilarinos drills into here.

Two aspects that make the idea of a "service bus" generally very attractive are that the service bus enables identity federation and connectivity federation. This idea gets far more interesting and more broadly applicable when we remove the "Enterprise" constraint from ESB and put "Internet" in its place, thus elevating it to an "Internet Service Bus", or ISB. If we look at the most popular Internet-dependent applications outside of the browser these days – the many instant messaging apps, BitTorrent, Limewire, VoIP, Orb/Slingbox, Skype, Halo, Project Gotham Racing, and others – many of them depend on one or two key services that must be provided for each of them: identity federation (or, in the absence of that, a central identity service) and some sort of message relay to connect two or more application instances that each sit behind firewalls – and at the very least some stable, shared rendezvous point or directory to seed P2P connections. The question "how does Messenger work?" has, from a high-level architecture perspective, a simple answer: the Messenger "switchboard" acts as a message relay.

The problem gets really juicy when we look at the reality of what connecting such applications means, and at what happens when an ISV (or you!) comes up with the next cool thing on the Internet:

You'll soon find out that you will have to run a whole lot of server infrastructure, and the routing of all of that traffic goes through your pipes. If your cool thing involves moving lots of large files around (let's say you want to build a photo-sharing app like the very unfortunately deceased Microsoft Max), you'd suddenly find yourself running some significant sets of pipes (tubes?) into your basement even though your users are just passing data from one place to the next. That's a killer for lots of good ideas, as it represents a significant entry barrier. Interesting stuff can get popular very, very fast these days – sometimes faster than you can say "venture capital".

Messenger runs such infrastructure. And the need for such infrastructure was indeed a (not entirely unexpected) important takeaway from the cited Max project. What looked to be just a very polished and cool client app showcasing all the Vista and NETFX 3.0 goodness was merely the tip of a significant iceberg of (just as cool) server functionality that was running in a Microsoft data center to make the sharing experience as seamless and easy as it was. Once you want to do cool stuff that goes beyond the request/response browser thing, you easily end up running a data center. And people will quickly think that your application sucks if that data center doesn't "just work". That translates into several "nines" of availability in my book. And that'll cost you.

As cool as Flickr and YouTube are, I don't think any of them or their brethren are nearly as disruptive in terms of architectural paradigm shift and long-term technology impact as Napster, ICQ and Skype were when they appeared on the scene. YouTube is just a place with interesting content. ICQ changed the world of collaboration. Napster's and Skype's impact changed and is changing entire industries. The Internet is far more, and has more potential, than just having some shared, mashed-up places where lots of people go to consume, search and upload stuff. "Personal computing", where I'm in control of MY stuff and share between MY places from wherever I happen to be, and where I'm NOT giving that data to someone else so that they can decorate my stuff with ads, has a future. The pendulum will swing back. I want to be able to take a family picture with my digital camera and snap that into a digital picture frame at my dad's house at the push of a button without some "place" being in the middle of that. The picture frame just has to be able to stick its head out to a place where my camera can talk to it so that it can accept that picture and know that it's me who is sending it.

Another personal, very concrete and real case in point: I am running, and I've written about this before, a custom-built (software/hardware) combo of two machines (one in Germany, one here in the US) that provide me and my family with full Windows Media Center embedded access to live and recorded TV along with electronic program guide data for 45+ German TV channels, sports pay-TV included. The work of getting the connectivity right (dynamic DNS, port mappings, firewall holes), dealing with the bandwidth constraints and shielding this against unwanted access was ridiculously complicated. This solution, together with IP telephony and video conferencing (over Messenger and Skype), shrinks the distance to home to what's effectively just the inconvenience of the 9-hour time difference and of not seeing family and friends in person all that often. Otherwise we're completely "plugged in" on what's going on at home and in Germany in general. That's an immediate and huge improvement in quality of living for us, it is enabled by the Internet, and it has very little to do with "the Web", let alone "Web 2.0" – except that my program guide app for Media Center happens to be an AJAX app today. Using BizTalk Services would throw out a whole lot of the complexity that I had to deal with myself, especially on the access control/identity, connectivity and discoverability fronts. Of course, since I've done it the hard way and it's working to a degree that my wife is very happy with as it stands (which is the customer satisfaction metric that matters here), I'm not making changes for technology's sake until I attack the next revision of this, or I'll wait for one of the alternative and improving solutions (Orb is on a good path) to catch up with what I have.

But I digress. Just as much as the services that were just announced (and the ones that are lined up to follow) are a potential enabler for new Napster/ICQ/Skype type consumer space applications from innovative companies who don't have the capacity or expertise to run their own data center, they are also and just as importantly the "Small and Medium Enterprise Service Bus".

If you are an ISV selling shrink-wrapped business solutions to SMEs whose network infrastructure may be as simple as a DSL line (with dynamic IP) that goes into a (wireless) hub and is as locked down as it possibly can be by the local networking company that services them, we can do as much as we want as an industry in trying to make inter-company B2B work and expand it to SMEs; your customers just aren't playing in that game if they can't get over these basic connectivity hurdles.

Your app, which lives behind the firewall shield, NAT and a dynamic IP, doesn't have a stable, public place where it can publish its endpoints, and you have no way to federate identity (and access control) unless you do some pretty invasive surgery on their network setup or you end up building and running a bunch of infrastructure on-site or for them. And that's the same problem the consumer apps mentioned above have. Even more so, if you look at the list of "coming soon" services, you'll find that problems like relaying events or coordinating work with workflows are very relevant to many common use-cases in SME business applications once you imagine expanding their scope to inter-company collaboration.

So where's "Megacorp Enterprises" in that play? First of all, Megacorp isn't an island. Every Megacorp depends on lots of SME suppliers and retailers (or their equivalents in the respective lingo of the verticals). Plugging all of them directly into Megacorp's "ESB" often isn't feasible for lots of reasons and increasingly less so if the SME had a second or third (imagine that!) customer and/or supplier. 

Second, Megacorp isn't a uniform big entity. The count of "enterprise applications" running inside of Megacorp is measured in thousands rather than dozens. We're often inclined to think of SAP or Siebel when we think of enterprise applications, but the vast majority are much simpler and more scoped than that. It's not entirely ridiculous to think that some of those applications run (gasp!) under someone's desk or in a cabinet in an extra room of a department. And it's also not entirely ridiculous to think that these applications are so vertical and special that their integration into the "ESB" gets continuously overridden by someone else's higher priorities, and yet the respective business department needs a very practical way to connect with partners now and be "connectable" even though it sits deeply inside the network thicket of Megacorp. While it is likely on every CIO's goal sheet to contain that sort of IT anarchy, it's a reality that needs answers in order to keep the business bringing in the money.

Third, Megacorp needs to work with Gigacorp. To make it interesting, let's assume that Megacorp and Gigacorp don't like each other much and trust each other even less. They even compete. Yet, they've got to work on a standard and hence they need to collaborate. It turns out that this scenario is almost entirely the same as the "Panic! Our departments take IT in their own hands!" scenario described above. At most, Megacorp wants to give Gigacorp a rendezvous and identity federation point on neutral ground. So instead of letting Gigacorp on their ESB, they both hook their apps and their identity infrastructures into the ISB and let the ISB be the mediator in that play.

Bottom line: There are very many solution scenarios, of which I mentioned just a few, where "I" is a much more suitable scope than "E". Sometimes the appropriate scope is just "I"; sometimes the appropriate scope is just "E". The key to achieving the agility that SOA strategies commonly promise is the ability to do the "E to I" scale-up whenever you need it in order to enable broader communication. If you need to elevate one service or a set of services from your ESB to Internet scope, you have the option to do so, as appropriate and integrated with your identity infrastructure. And since this is all strictly WS-* standards based, your "E" might actually be "whatever you happen to run today". BizTalk Services is the "I".

Or, in other words, this is a pretty big deal.

Categories: Architecture | SOA | IT Strategy | Microsoft | MSDN | BizTalk | WCF | Web Services

I've spent the last week and a half doing one of the most fun (seriously) work assignments that each Program Manager on our team gets to do every once in a while: servicing. So until yesterday night (I'm flying home to Germany today) I was in charge of ASP.NET Web Services and Remoting. And even though these technologies have been out there for quite a while now, there are still situations where stuff breaks and people are scratching their heads wondering what's going on. Overall, though, it was a very, very quiet time on the bug front.

The one issue that we found on my watch is that you can configure ASP.NET Web Forms in a way that it breaks ASP.NET Web Services (ASMX). We are shipping one ASP.NET Web Page (.aspx) with ASMX and that unfortunate interaction manages to break that exact page with an error that's hard to figure out unless you have substantial ASP.NET knowledge and you have enough confidence in that knowledge to not trust us ;-)

If you globally override the autoEventWireup setting in the <pages/> config element in the ASP.NET web.config and set it to "false", the DefaultWsdlHelpGenerator.aspx page (which sits in the CONFIG directory of the Framework) becomes very unhappy and fails with a NullReferenceException, stating "Object reference not set to an instance of an object." and showing you some code that's definitely not yours.
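For illustration, this is the sort of global override in web.config that triggers the problem (a minimal sketch):

   <configuration>
     <system.web>
       <!-- globally disables automatic Page_Load/Page_Init wire-up -->
       <pages autoEventWireup="false" />
     </system.web>
   </configuration>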

What happened? Well, the file is missing a directive that overrides the override of the default. The fix is to go edit the DefaultWsdlHelpGenerator.aspx file and add the line:

<%@ Page AutoEventWireup="true" %>

That will fix the problem.

Now, the big question is: "Will you put that into a service pack?" While there's obviously a bug here, the answer is, in this particular case, "don't know yet". Replacing or editing that particular file is potentially very impactful surgery on the patched system, given that the file sits in the config directory in source code precisely because you are supposed to be able to change it. Could we touch changed files? Probably not. Could we touch unchanged files? Probably. So how would you surface the difference and make sure that the systems we couldn't patch don't suffer from the particular bug? What's the test impact for the code and for the service pack or patch installer? How many people are actually using that ASP.NET config directive AND hosting ASMX services in the same application and/or scope? Is it actually worth doing? Making changes in code that has already shipped and is part of the Framework is serious business, since you are potentially altering the behavior of millions of machines all at once. So that part is definitely not done in an "agile" way, but takes quite a bit of consideration – while it takes just 10 seconds and notepad.exe for you.

Categories: ASP.NET | Web Services

December 20, 2006
@ 11:07 PM

It's been slashdotted and also otherwise widely discussed that Google has deprecated their SOAP API. A deadly blow for SOAP as people are speculating? Guess not.

What I find striking are the differences in the licenses between the AJAX API and the SOAP API. That's where the beef is. While the results obtained through the SOAP API can be used (for non-commercial purposes) in practically any way, except that "you may not use the search results provided by the Google SOAP Search API service with an existing product or service that competes with products or services offered by Google.", the AJAX API is constrained to use with web sites, with the terms of use stating that "The API is limited to allowing You to host and display Google Search Results on your site, and does not provide You with the ability to access other underlying Google Services or data."

The AJAX API is a Web service that works for Google because its terms of use are very prescriptive about how to build a service that ensures Google's advertising machine gets exposure and clicks. That's certainly a reasonable business decision, but it has nothing to do with SOAP vs. REST or anything else technical. There's just no money in application-to-application messaging for Google (unless they actually set up an infrastructure to charge for software as a service and provide support and proper SLAs for it – ones that say more than "we don't make any guarantees whatsoever"), while there's a lot of money for them in getting lots and lots of people to give them a free spot on their own sites onto which they can place their advertising. That's what their business is about, not software.

Categories: IT Strategy | Technology | Web Services

September 1, 2006
@ 09:00 PM

The Windows Communication Foundation's RC1 bits are now live. RC means "Release Candidate", and our team is really, really serious about this release being as close to what we intend to ship as we can ever get. Our database view of unresolved code defects is essentially empty (there are no more than a handful of small fixes for very esoteric scenarios that we're still doing for RTM). The time of breaking changes is absolutely and finally over for "WCF Version 1".

The team is very excited about this. There's lots of joy in the hallways. We're getting close to being done. Remember when you saw the first WS-* specs popping up out there some 6 years ago? That's when this thing was started. You can just imagine how pumped the testers, developers and program managers are around here. And even though I am new to the family, I get to celebrate a little too. Greatness.

Get the RC1 for the .NET Framework 3.0 with the WCF bits from here:
http://www.microsoft.com/downloads/details.aspx?FamilyId=19E21845-F5E3-4387-95FF-66788825C1AF&displaylang=en 

There's one little issue with the Visual Studio Tools aligned with that version, so it will take another day or so until those get uploaded.

As always, if you find problems, tell us: http://connect.microsoft.com/wcf

Categories: Indigo | WCF | Web Services

I've been quoted as having said so at TechEd, and I'll happily repeat it: "XML is the assembly language of Web 2.0", even though some (and likely some more) disagree. James Speer writes "Besides, Assembly Language is hard, XML isn’t.", which I have to disagree with.

True, throwing together some angle brackets isn't the hardest thing in the world, but beating things into the right shape is hard, and probably even harder than in assembly. Yes, one can totally, after immersing oneself in the intricacies of Schema, write complex types and ponder for days and months about the right use of attributes and elements. It's absolutely within reach for a WSDL zealot to code up messages, portTypes and operations by hand. But please, if you think that's the right way to do things, I also demand that you write and apply your security policy in angle-bracket notation from the top of your head and generate WCF config from that using svcutil, instead of just throwing a binding together – because XML is so easy. Oh? Too hard? Well, it turns out that except for our developers and testers who focus on getting these mappings right, nobody on our product team would probably ever even want to try writing such a beast by hand for any code that sits above the deep-down guts of our stack. This isn't the fault of the specifications (or of people here being ignorant); it's a function of security being hard and the related metadata being complex. Similar things, even though the complexity isn't quite as extreme there, can be said about the other extensions to the policy framework, such as WS-RM Policy or those for WS-AT.

As we're getting to the point where the full range of functionality covered by the WS-* specifications is due to hit the mainstream – with us releasing WCF and our valued competitors releasing their respective implementations – hand-crafted contracts will become increasingly meaningless, because it's beyond the capacity of anyone whose job it is to build solutions for their customers to write a complete set of contracts that ensures not only simple data interop but also protocol interop. Just as there were days when all you needed was assembly and INT 21h to write a DOS program (yikes), or knowledge of "C" alongside stdio.h and fellows to write anything for everything, things are changing now in the same way in Web Services land. Command of XSD and WSDL is no longer sufficient; all the other stuff is just as important to make things work.

Our WCF [DataContract] doesn't support attributes. That's a deliberate choice, because we want to enforce simplicity and enhance the interoperability of schemas. We put an abstraction over XSD and limit the control over it because we want to simplify the stuff that goes across the wire. We certainly allow everyone to use the XmlSerializer with all of its attribute-based, fine-grained control over schema, even though there are quite a few Schema constructs that even that doesn't support when building schema from such metadata. If you choose to, you can ignore all of our serialization magic, fiddle with the XML Infoset outright and supply your own schema. However, XML and Schema are specifications that everyone and their dog wanted to get features into, and Schema is hopelessly overengineered. Ever since we all (the industry, not only MS) boarded the SOAP/WS train, we've been debating how to constrain the features of that monster to a reasonable subset that makes sense, and the debate doesn't want to end.
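To illustrate the difference (type and member names made up), here is the same payload shape expressed once with the deliberately constrained [DataContract] model – elements only – and once with the XmlSerializer's fine-grained control:

   using System.Runtime.Serialization;
   using System.Xml.Serialization;

   // DataContractSerializer model: every member becomes an element;
   // there is simply no knob for emitting XML attributes.
   [DataContract]
   public class Customer
   {
       [DataMember] public string Id;
       [DataMember] public string Name;
   }

   // XmlSerializer model: full attribute/element control, at the cost
   // of a much larger surface of possible schema shapes.
   [XmlRoot("customer")]
   public class CustomerXs
   {
       [XmlAttribute("id")] public string Id;
       [XmlElement("name")] public string Name;
   }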

James writes that he "take[s] a lot of care in terms of elements vs. attributes and mak[es] sure the structure of the XML is business-document-like", which only really makes sense if XML documents used in WS scenarios were meant for immediate human consumption – which they're not.

We want to promote a model that is simple and consistent to serialize to and from on any platform, where things like the differentiation between attributes and elements don't stand in the way of a 1:1 mapping into alternate, non-XML serialization formats such as JSON or what-have-you (most of which don't care about that sort of differentiation). James' statement about "business-document-like" structures is also interesting considering EDIFACT, X12 or SWIFT, all of which only know records, fields and values and don't care about that sort of subtle element/attribute differentiation, either. (Yes, none of those might be "hip" any more, but they are implemented and power a considerable chunk of the world economy's data exchange.)

By now, XML is the foundation for everything that happens on the web, and I surely don't want it to go away. But we have arrived at the point where matters have gotten so complicated that a layer of abstraction over pretty much all things XML has become a necessity for everyone who makes their money building customer solutions rather than teaching or writing about XML. In my last session at TechEd, I asked a room of about 200 people "Who of you hand-writes XSLT transforms?" 4 hands. "Who of you used to hand-write XSLT transforms?" 40+ hands. I think it's safe to assume that a bunch of those folks who have sworn off masochism and no longer hand-code XSLT are now using tools like the BizTalk Mapper or Altova's MapForce, which means that XSLT is alive and kicking, but only downstairs in the basement. However, the abstractions that these tools provide also allow bypassing XSLT altogether and generating the transformation logic straight into compiled C++, Java, or C# code, which is what MapForce offers. WSDL is already walking down that path.

Categories: TechEd US | Indigo | WCF | Web Services

April 3, 2006
@ 03:39 PM

Mark, I care deeply about the hobbyist who writes some code on the side, the programmer who works from 9-5 and has a life and just as deeply about those who work 24/7 and about everybody in between ;-)

That said: now that we're getting close to being done with the "this vs. that" debate, we can most certainly figure out the "how can we optimize the programming experience" story. For very many people I've talked to in the past 4 years or so, reducing complexity is an important thing. I firmly believe that we can do enterprise messaging and Web-Style/Lo-REST/POX with a single technology stack that scales up and down in terms of its capabilities.  

Since I take that you are worried about code-bloat on the app-level, how would you think about the following client-side one-liners?

  • T data = Pox.Get<T>("myCfg");
  • T data = Pox.Get<T>("myCfg", new Uri("/customer/8929", UriKind.Relative));
  • T data = Pox.Get<T>("myCfg", new Uri("http://example.com/customer/8929"));
  • T data = Pox.Get<T>(new Uri("http://example.com/customer/8929"));
  • U reply = Pox.Put<T,U>(new Uri("http://example.com/customer/8929"), data, ref location);
  • U reply = Pox.Post<T,U>(new Uri("http://example.com/customer/"), data, out location);
  • Pox.Delete(settings, new Uri("http://example.com/customer/8929"));

Whereby "myCfg" refers to a set of config to specify security, proxies, and so forth; settings would refer to an in-memory object with the same reusable info. Our stack lets me code that sort of developer experience in a quite straightforward fashion and I can throw SOAPish WS-Transfer under it and make the call flow on a reliable, routed TCP session with binary encoding without changing the least bit.

If I am still missing your point in terms of ease of use and line count, make a wish, Mark. :-)

Categories: Indigo | Web Services | XML

Inside the big house....

Back in December of last year, about two weeks before I publicly announced that I would be working for Microsoft, I started a nine-part series on REST/POX* programming with Indigo WCF. (1, 2, 3, 4, 5, 6, 7, 8, 9). Since then, the WCF object model has seen quite a few feature and usability improvements across the board, and those are significant enough to justify rewriting the entire series to bring it up to the February CTP level. I will keep updating it through Vista/WinFX Beta 2 and as we march towards our RTM. We've got a few changes/extensions in our production pipeline to make the REST/POX story for WCF v1 stronger, and I will track those changes with yet another re-release of this series.

Except on one or two occasions, I haven't re-posted a reworked story on my blog. This one is quite a bit different because of its sheer size and the things I learned in the process of writing it and developing the code along the way. So even though it is relatively new, it's already due for an end-to-end overhaul to represent my current thinking. It's also different because I am starting to cross-post content to http://blogs.msdn.com/clemensv with this post; however, http://friends.newtelligence.net/clemensv remains my primary blog, since that runs my engine ;-)

Listening

The "current thinking" is of course very much influenced by now working for the team that builds WCF instead of being a customer looking at things from the outside. That changes the perspective quite a bit. One great insight I gained is how non-dogmatic and customer-oriented our team is. When I started the concrete REST/POX work with WCF back in last September (on the customer side still working with newtelligence), the extensions to the HTTP transport that enabled this work were just showing up in the public builds and they were sometimes referred to as the "Tim/Aaaron feature". Tim Ewald and Aaron Skonnard had beat the drums for having simple XML (non-SOAP) support in WCF so loudly that the team investigated the options and figured that some minimal changes to the HTTP transport would enable most of these scenarios**. Based on that feature, I wrote the set of dispatcher extensions that I've been presenting in the V1 of this series and newtellivision as the applied example did not only turn out to be a big hit as a demo, it also was one of many motivations to give the REST/POX scenario even deeper consideration within the team.

REST/POX is a scenario we think about as a first-class scenario alongside SOAP-based messaging - we are working with the ASP.NET Atlas team to integrate WCF with their AJAX story and we continue to tweak the core WCF product to enable those scenarios in a more straightforward fashion. Proof for that is that my talk (PPT here) at the MIX06 conference in Las Vegas two weeks ago was entirely dedicated to the non-SOAP scenarios.

What does that say about SOAP? Nothing. There are two parallel worlds of application-level network communication that live in peaceful co-existence:

  • Simple point-to-point, request/response scenarios with limited security requirements and no need for "enterprise features" along the lines of reliable messaging and transaction integration.
  • Rich messaging scenarios with support for message routing, reliable delivery, discoverable metadata, out-of-band data, transactions, one-way and duplex, etcetc.

The Faceless Web

The first scenario is the web as we know it. Almost. HTTP is an incredibly rich application protocol once you dig into RFC 2616, look at the methods in detail, and consider response codes beyond 200 and 404. HTTP is strong because it is well-defined, widely supported and designed to scale; HTTP is weak because it is effectively constrained to request/response, there is no story for server-to-client notifications, and it abstracts away the inherent reliability of the Transmission Control Protocol (TCP). These pros and cons lists are not exhaustive.

What REST/POX does is elevate the web model above the "you give me text/html or */* and I give you application/x-www-form-urlencoded" interaction model. Whether the server punts up markup in the form of text/html or text/xml or some other angle-bracket dialect or some raw binary isn't too interesting. What's changing the way applications are built, and what is really creating the foundation for, say, AJAX, is that the path back to the server is increasingly XML'ised. PUT and POST with a content type of text/xml are significantly different from application/x-www-form-urlencoded. What we are observing is the emancipation of HTTP from HTML, to a degree that the "HT" in HTTP is becoming a misnomer. Something like IXTP ("Interlinked XML Transport Protocol" – I just made that up) would be a better fit by now.
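To make that concrete, here is a minimal sketch of that XML'ised path back to the server (the endpoint URL is made up):

   using System.IO;
   using System.Net;
   using System.Text;

   // POST an XML payload -- not form-encoded fields -- to a service.
   HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
       "http://example.com/status");
   request.Method = "POST";
   request.ContentType = "text/xml";

   byte[] body = Encoding.UTF8.GetBytes("<status>Hello, I'm good</status>");
   request.ContentLength = body.Length;
   using (Stream stream = request.GetRequestStream())
   {
       stream.Write(body, 0, body.Length);
   }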

The astonishing bit in this is that there has been no fundamental technology change driving this. The only thing I can identify is that browsers other than IE now support XMLHTTP and have therefore created the critical mass for broad adoption. REST/POX rips the face off the web and enables a separation of data and presentation in a way that makes mashups easily possible, and we're driving towards a point where the browser cache becomes more of an application repository than merely a place that holds cacheable collateral. When developing the newtellivision application, I spent quite a bit of time tuning the caching behavior so that HTML and script are pulled from the server only when necessary and as static resources, and all actual interaction with the backend services happens through XMLHTTP in REST/POX style. newtellivision is not really a hypertext website; it's more like a smart-client application that is delivered through the web technology stack.

Distributed Enterprise Computing

All that said, the significant investments in SOAP and WS-* that were made by Microsoft and industry partners such as Sun, IBM, Tibco and BEA have their primary justification in the parallel universe of highly interoperable, feature-rich intra- and inter-application communication, as well as in enterprise messaging. Even though there was a two-way split right through the industry in the 1990s, with one side adopting the Distributed Computing Environment (DCE) and the other side driving the Common Object Request Broker Architecture (CORBA), both of these camps made great advances towards rich, interoperable (within their boundaries) enterprise communication infrastructures. All of that got effectively killed by the web gold-rush starting in 1994/1995, as the focus (and investment) in the industry turned to HTML/HTTP and to building infrastructures that supported the web first and everything else as a secondary consideration. The direct consequence of the resulting (even if big) technology islands that sit underneath the web, and of the neglect of inter-application communication needs, is that inter-application communication has slowly grown to become one of the greatest industry problems and cost factors. Contributing to that is that the average yearly number of corporate mergers and acquisitions has tripled compared to 10-15 years ago (even though the trend has slowed in recent years), and the information technology dependency of today's corporations has grown to become one of the deciding, if not the deciding, competitive factor for an ever-increasing number of industries.

What we (the industry as a whole) are doing now and for the last few years is that we're working towards getting to a point where we're both writing the next chapter of the story of the web and we're fixing the distributed computing story at the same time by bringing them both onto a commonly agreed platform. The underpinning of that is XML; REST/POX is the simplest implementation. SOAP and the WS-* standards elevate that model up to the distributed enterprise computing realm.

If you compare the core properties of SOAP plus WS-Addressing and the Internet Protocol (IP) in an interpretative fashion side by side, and then also compare the Transmission Control Protocol (TCP) to WS-ReliableMessaging, it may become quite clear to you what a fundamental abstraction above the networking stacks and concrete technology couplings the WS-* specification family has become. Every specification in the long list of WS-* specs is about converging and unifying formerly proprietary approaches to messaging, security, transactions, metadata, management, business process management and other aspects of distributed computing into this common platform.

Convergence

The beauty of that model is that it is an implementation superset of the web. SOAP is the out-of-band metadata container for these abstractions. The key feature of SOAP is SOAP:Header, which provides a standardized facility to relay the required metadata alongside payloads. If you are willing to constrain out-of-band metadata to one transport or application protocol, you don't need SOAP.

There is really very little difference between SOAP and REST/POX in terms of the information model. SOAP carries headers and HTTP carries headers. In HTTP they are bolted to the protocol layer and in SOAP they are tunneled through whatever carries the envelope. [In that sense, SOAP is calculated abuse of HTTP as a transport protocol for the purpose of abstraction.] You can map WS-Addressing headers from and to HTTP headers.
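To make that mapping tangible, here is an illustrative sketch of my own (URIs invented, nothing normative) of the same control information riding in either layer. In plain HTTP, the request line and headers carry it:

POST /Giro/Transfer HTTP/1.1
Host: www.example.org
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://schemas.example.org/MyBank/Giro/Transfer"

In SOAP, the equivalent rides inside the envelope, independent of whatever transport is underneath:

<soap:Header>
   <wsa:To>http://www.example.org/Giro/Transfer</wsa:To>
   <wsa:Action>http://schemas.example.org/MyBank/Giro/Transfer</wsa:Action>
</soap:Header>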

The SOAP/WS-* model is richer, more flexible and more complex. The SOAP/WS-* set of specifications is about infrastructure protocols. HTTP is an application protocol and therefore it is naturally more constrained - but it has inherently defined qualities and features that require an explicit protocol implementation in the SOAP/WS-* world; one example is the inherent CRUD (create, read, update, delete) support in HTTP that is matched by the explicitly composed-on-top WS-Transfer protocol in SOAP/WS-*.
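As a rough illustration of that last point (addresses invented; the action URI is from the September 2004 WS-Transfer draft, so check the current spec): HTTP's "read" is built into the protocol itself,

GET /customers/4711 HTTP/1.1
Host: www.example.org

while the WS-* equivalent is an explicitly composed SOAP interaction whose retrieval semantics come from WS-Transfer rather than from the carrier:

<soap:Header>
   <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action>
   <wsa:To>http://www.example.org/customers/4711</wsa:To>
</soap:Header>
<soap:Body/>

In WS-Transfer's Get, the request body stays empty; the endpoint reference in the headers identifies the resource to retrieve.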

The common platform is XML. You can scale down from SOAP/WS-* to REST/POX by putting the naked payload on the wire and relying on HTTP for your metadata, error and status information if that suits your needs. You can scale up from REST/POX to SOAP/WS-* by encapsulating payloads and leveraging the WS-* infrastructure for all the flexibility and features it brings to the table. [It is fairly straightforward to go from HTTP to SOAP/WS-*, and it is harder to go the other way. That's why I say "superset".]

Doing the right thing for a given scenario is precisely what we are enabling in WCF. There is a place for REST/POX for building the surface of the mashed and faceless web and there is a place for SOAP for building the backbone of it - and some may choose to mix and match these worlds. There are many scenarios and architectural models that suit them. What we want is

One Way To Program

* REST=REpresentational State Transfer; POX="Plain-Old XML" or "simple XML"

Categories: Architecture | SOA | MIX06 | Technology | Web Services

December 11, 2004
@ 02:35 PM

The stack trace below (snapshot taken at a breakpoint in [WebMethod] "HelloWorld") shows that I am having quite a bit of programming fun these days: server-side ASP.NET hooked up to an MSMQ listener.

simpleservicerequestinweb.dll!SimpleServiceRequestInWeb.Hello.HelloWorld() Line 53 C#
system.web.services.dll!System.Web.Services.Protocols.LogicalMethodInfo.Invoke(System.Object target, System.Object[] values) + 0x92 bytes 
system.web.services.dll!System.Web.Services.Protocols.WebServiceHandler.Invoke() + 0x9e bytes 
system.web.services.dll!System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest() + 0x142 bytes 
system.web.services.dll!System.Web.Services.Protocols.SyncSessionlessHandler.ProcessRequest(System.Web.HttpContext context) + 0x6 bytes 
system.web.dll!CallHandlerExecutionStep.System.Web.HttpApplication+IExecutionStep.Execute() + 0xb4 bytes 
system.web.dll!System.Web.HttpApplication.ExecuteStep(System.Web.HttpApplication.IExecutionStep step, bool completedSynchronously) + 0x58 bytes 
system.web.dll!System.Web.HttpApplication.ResumeSteps(System.Exception error) + 0xfa bytes 
system.web.dll!System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(System.Web.HttpContext context, System.AsyncCallback cb, System.Object extraData) + 0xe3 bytes 
system.web.dll!System.Web.HttpRuntime.ProcessRequestInternal(System.Web.HttpWorkerRequest wr) + 0x1e7 bytes 
system.web.dll!System.Web.HttpRuntime.ProcessRequest(System.Web.HttpWorkerRequest wr) + 0xb0 bytes 
newtelligence.enterprisetools.dll!newtelligence.EnterpriseTools.Msmq.MessageQueueAsmxDispatcher.MessageReceived(System.Object sender = {newtelligence.EnterpriseTools.Msmq.MessageQueueListener}, newtelligence.EnterpriseTools.Msmq.MessageReceivedEventArgs ea = {newtelligence.EnterpriseTools.Msmq.MessageReceivedEventArgs}) Line 33 C#
newtelligence.enterprisetools.dll!newtelligence.EnterpriseTools.Msmq.MessageQueueListener.ReceiveLoop() Line 305 + 0x2b bytes C#

Categories: ASP.NET | MSMQ | Web Services

The little series I am currently writing here on my blog has inspired me to write way more code than actually necessary to get my point across ;-) So by now I've got my own MSMQ transport for WSE 2.0 (yes, I know that others have written that already, but I am shooting for an "enterprise strength" implementation), a WebRequest/WebResponse pair to smuggle under arbitrary ASMX proxies, and I am more than halfway done with a server-side host for MSMQ-to-ASMX (spelled out: ASP.NET Web Services).

What bugs me is that WSE 2.0's messaging model is "asynchronous only", that it always performs a push/pull translation, and that there is no way to push a message through to a service on the receiving thread. Whenever I grab a message from the queue and put it into my SoapTransport's "Dispatch()" method, the message gets queued up in an in-memory queue that is then, on a concurrent thread, pulled (OnReceiveComplete) by the SoapReceivers collection and submitted into ProcessMessage() of the SoapReceiver (like any SoapService derived implementation) matching the target endpoint. So while I can dequeue from MSMQ within a transaction scope (ServiceDomain), that transaction scope doesn't make it across onto the thread that will actually execute the action inside the SoapReceiver/SoapService.
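Here is a minimal sketch of the gap (the DispatchToWse call is a placeholder for my transport's hand-off, not a WSE API; the rest is plain System.Messaging and System.EnterpriseServices):

using System.EnterpriseServices;
using System.Messaging;

ServiceConfig config = new ServiceConfig();
config.Transaction = TransactionOption.RequiresNew;
ServiceDomain.Enter(config);
try
{
    MessageQueue queue = new MessageQueue(@".\private$\requests");
    // This receive enlists in the surrounding DTC transaction ...
    Message message = queue.Receive(MessageQueueTransactionType.Automatic);
    // ... but handing the message to WSE only parks it in an in-memory queue.
    // The matching SoapReceiver's ProcessMessage() later runs on a different
    // thread, where ContextUtil.Transaction is not this transaction - it may
    // even run after the scope below has already committed or aborted.
    DispatchToWse(message); // placeholder for my SoapTransport "Dispatch()" hand-off
    ContextUtil.SetComplete();
}
finally
{
    ServiceDomain.Leave();
}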

So now I am sitting here, contemplating and trying to figure out a workaround that doesn't require me to rewrite a big chunk of WSE 2.0 (which I am totally not shy of if that is what it takes). Transaction marshaling, thread synchronization, ah, I love puzzles. Once I know how to solve this and have made the adjustments, I'll post the queue listener I promised to wrap up the series. The other code I've written in the process will likely surface in some other way.

December 5, 2004
@ 02:33 PM

"My Lists", "My Photos", "My Profile" .... sounds all very familiar over there in MSN Spaces. So ... roll in the Web service interfaces, please.

Categories: Web Services | Weblogs | XML

I get emails like that very frequently. I have some news.

Short story: Microsoft is still willing and working to publish the application that I presented at TechEd Europe (see Benjamin's report) and they keep telling me that it will come out. Apparently there is a lot of consensus building to be done to get a big sample application out of the door. So there's nothing to be found on msdn, yet.

Little known secret: There are 15 lucky individuals who have already received (hand-delivered) the Proseware code as a technical preview under a non-disclosure agreement. Because we (newtelligence) designed and wrote the sample application, we have permission to distribute the complete sample to participants of our SOA workshops and seminars.

So if you want to get your hands on it, all you need to do is send mail to training@newtelligence.com to sign up for one of the public events [Next published date is Dec 1-3, and the event is held in German, unless we get swamped with international inquiries] or send email to the same address asking for an on-site workshop delivery. At this time, we (and MS) tie the code sample to workshop attendance so that you really understand why the application was built the way it's built and so that you fully understand the guidance that the application implicitly and explicitly carries (and doesn't carry).

Categories: SOA | Web Services

October 26, 2004
@ 12:50 PM

Below are two SOAP messages that are only subtly different when you look at the XML text, but the way they “want to be treated” at the endpoint differs quite dramatically. The first targets a data-item/record/object and triggers a method, while the second targets an interface/endpoint/API and triggers a function/procedure.

The first message carries an out-of-band reference in the header; the second has that same reference inside the body. The first is a bit like how the implicit “this pointer” argument is passed “invisibly” to a C++ or C# method; the second is like passing an explicit context argument in C or (classic) Pascal or any other procedural language. The first binds to logic belonging to a specific object, the second binds to some object-neutral handling logic.

[1]
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
     <soap:Header>
           <wsa:To>http://www.example.org/Giro/Transfer</wsa:To>
           <my:Account xmlns:my="http://schemas.newtelligence.com/2004/10/MyBank">262616161</my:Account>
           <wsa:Action>http://schemas.newtelligence.com/2004/10/MyBank/Giro/Transfer</wsa:Action>
            …
     </soap:Header>
     <soap:Body>
            <my:Transfer xmlns:my="http://schemas.newtelligence.com/2004/10/MyBank">
               <my:TransferDestination>
                   <my:AccountNo>99999999999</my:AccountNo>
                   <my:Recipient>Peter Sample</my:Recipient>
                   <my:RoutingCode codeType="DE-BLZ">00000000</my:RoutingCode>
                   <my:Destination>Sample Bank</my:Destination>
               </my:TransferDestination>
               <my:Amount>100.78</my:Amount>
               <my:Currency>EUR</my:Currency>
               <my:TransferDate>2004-10-27</my:TransferDate>
               <my:ValueDate>2004-10-27</my:ValueDate>
            </my:Transfer>
     </soap:Body>
</soap:Envelope>


[2]
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
     <soap:Header>
            <wsa:To>http://www.example.org/Giro/Transfer</wsa:To>
            <wsa:Action>http://schemas.newtelligence.com/2004/10/MyBank/Giro/Transfer</wsa:Action>
            …
     </soap:Header>
     <soap:Body>
            <my:Transfer xmlns:my="http://schemas.newtelligence.com/2004/10/MyBank">
               <my:Account>262616161</my:Account>
               <my:TransferDestination>
                   <my:AccountNo>99999999999</my:AccountNo>
                   <my:Recipient>Peter Sample</my:Recipient>
                   <my:RoutingCode codeType="DE-BLZ">00000000</my:RoutingCode>
                   <my:Destination>Sample Bank</my:Destination>
               </my:TransferDestination>
               <my:Amount>100.78</my:Amount>
               <my:Currency>EUR</my:Currency>
               <my:TransferDate>2004-10-27</my:TransferDate>
               <my:ValueDate>2004-10-27</my:ValueDate>
            </my:Transfer>
     </soap:Body>
</soap:Envelope>


A possible endpoint reference (“object pointer” in OOldspeak) for the message target for [1] is

<wsa:EndpointReference xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
    <wsa:Address>http://www.example.org/Giro/Transfer</wsa:Address>
    <wsa:ReferenceParameters>
         <my:Account xmlns:my="http://schemas.newtelligence.com/2004/10/MyBank">262616161</my:Account>
    </wsa:ReferenceParameters>
    ...
</wsa:EndpointReference>


A possible endpoint reference for [2] is

<wsa:EndpointReference xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
    <wsa:Address>http://www.example.org/Giro/Transfer</wsa:Address>
</wsa:EndpointReference>

I am sure it’s boring to everybody else, but I find it quite funny how WS-Addressing turns out to be the “Object Access Protocol” for SOAP ;-)

Categories: Web Services

Perspective 1:

I am not a fan of the way WS-Eventing relies on WS-Addressing's EPR binding rules to correlate subscription responses (containing a parameterized "wse:SubscriptionManager" EPR that holds the subscription identifier in wsa:ReferenceParameters; see table 5, lines 24-26 in WS-Eventing) with subsequent renew, unsubscribe and status inquiry operations, and forces the most central information bit for the respective operation ("Which subscription do you wish to renew?") into the header of the respective message (wse:Identifier) instead of having it in the body.

So for the "GetStatus" operation, you end up with a single body element "GetStatus" that doesn't carry any content [see table 8, lines 19-21 and line 24 in WS-Eventing]. I know that XML lost everything intuitive about it a long time back, but the way this works is really counterintuitive and just doesn't look right. I would expect <GetStatus>uri:Subscription-Identifier</GetStatus>.
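For illustration, this is roughly the shape the spec's binding rules produce (namespaces per the August 2004 drafts; the address and identifier value are invented):

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
               xmlns:wse="http://schemas.xmlsoap.org/ws/2004/08/eventing">
   <soap:Header>
      <wsa:To>http://www.example.org/subscriptionManager</wsa:To>
      <wsa:Action>http://schemas.xmlsoap.org/ws/2004/08/eventing/GetStatus</wsa:Action>
      <wse:Identifier>uuid:82204a83-52ac-4bf5-92a1-65e17f28f180</wse:Identifier>
   </soap:Header>
   <soap:Body>
      <wse:GetStatus/>
   </soap:Body>
</soap:Envelope>

whereas I would expect the identifier to travel as the operation's payload:

<soap:Body>
   <wse:GetStatus>uuid:82204a83-52ac-4bf5-92a1-65e17f28f180</wse:GetStatus>
</soap:Body>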

Perspective 2:

Now, on the other hand, this is actually a sound approach if I look at the said SubscriptionManager EPR as a resource. What the wsa:ReferenceParameters element does to the EPR is bind it uniquely to the respective subscription (section 1.3 of the spec mandates this).

What's confusing here (and not really a good name choice in my view) is that the wse:SubscriptionManager EPR does NOT only point to the subscription manager endpoint, but rather binds all the way through to a specific subscription. Once the binding process to that subscription is done, the requested action is then executed on the bound resource.

Ok ... I am sorry ... too abstract? I'll rephrase.

What I am saying is that WS-Eventing is an example showing how messages aren't necessarily targeted at the thing with [WebMethod] on top of it, but they may indeed be targeted at something more specific like a database record. So the binding of the endpoint reference is not a matter of the client stopping at http://somewhere/blahblah.asmx but is only complete when the wse:Identifier header is evaluated on the service-side and inside blahblah.asmx, resolved against the subscription database and the actual message target, the respective subscription record, is found. Once the EPR is fully resolved to yield the message target (the record), <GetStatus/> indeed becomes a parameterless operation and the body does not have to carry further content.

EPR = Moniker ;-)

Categories: Web Services

Whenever you start thinking stuff is stable, it turns out that it is not the case. I am trying to implement a WS-Eventing compliant service and of course I ran into the issue that that specification sits on the August 2004 edition (and W3C submission) of WS-Addressing while WSE 2.0 sits on the March 2004 edition of WS-Addressing. To implement WS-Eventing correctly, I would now have to write a WS-Addressing implementation parallel to the one existing in WSE 2.0, because - of course - the August 2004 edition sports a new namespace and has a subtly different schema. Unfortunately, the March 2004 edition of WS-Addressing is so fundamental for WSE 2.0 that routing and security and everything else would sit on the March version while my own eventing functionality, and nothing else, would ride on the August version at the same time and in the same message. Of course that seems just totally wrong. WSE 2.1, please!

Categories: Web Services

Unless you enable the config setting below, WSE intentionally injects invalid “Via” routing information into ReplyTo and FaultTo addresses for security reasons, and therefore you can’t just turn around and create, for instance, a new SoapSender(SoapRequestContext.Address.ReplyTo) at the receiving endpoint, or set the reply envelope’s context like envelope.Context.Addressing.Destination = SoapRequestContext.Address.ReplyTo. Because “Via” trumps any other address in an endpoint reference for delivery, a reply to such an invalidated EPR will usually yield a 404. I fell into that hole for the second or third time now and it somehow never stuck in long-term memory, so this is the persisted “note to self” ;-)

<microsoft.web.services2>
   <messaging>
      <allowRedirectedResponses enabled="true" />
   </messaging>
</microsoft.web.services2>

Categories: Web Services

July 12, 2004
@ 01:46 PM

I might be blind to not have seen that before, but this here hit me over the third Guinness at an Irish Pub while answering a sudden technical question from my buddy Bart:

<wsa:ReplyTo xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/03/addressing">
 <wsa:Address>http://server/service_second_intermediary</wsa:Address>
 <wsa:ReferenceProperties>
  <wsa:ReplyTo>
   <wsa:Address>http://server/service_first_intermediary</wsa:Address>
   <wsa:ReferenceProperties>
    <wsa:ReplyTo>
     <wsa:Address>http://server/service_outer_caller</wsa:Address>
    </wsa:ReplyTo>
   </wsa:ReferenceProperties>
  </wsa:ReplyTo>
 </wsa:ReferenceProperties>
</wsa:ReplyTo>

Read the EPR binding rules section 2.3 in the WS-Addressing spec and you'll find out just like me how distributed "call-stacks" work with WS-Addressing, if your choice of communication pattern is the far more flexible duplex (or here) pattern for datagram-based message conversations instead of the rather simplistic request/response model. Of course, any endpoint-reference can be stacked in the same way. I always wondered where the (deprecated) WS-Routing "path" went, which allowed specifying source routes. I think I ran into it.

Categories: Web Services

We've built FABRIQ, we've built Proseware. We have written seminar series about Web Services Best Practices and Service Orientation for Microsoft Europe. I speak about services and aspects of services at conferences around the world. And at all events where I talk about Services, I keep hearing the same question: "Enough of the theory, how do I do it?"

Therefore we have announced a seminar/workshop around designing and building service-oriented systems that puts together all the things we've found out in the past years about how services can be built today, on today's Microsoft technology stack, and how your systems can be designed with migration to the next-generation Microsoft technology stack in mind. Together with our newtelligence Associates, we are offering this workshop for in-house delivery at client sites world-wide and are planning to announce dates and locations for central, "open for all" events soon.

If you are interested in inviting us for an event at your site, contact Bart DePetrillo, or write to sales@newtelligence.com. If you are interested in participating at a central seminar, Bart would like to hear about it (no obligations) so that we can select reasonable location(s) and date(s) that fit your needs.

Categories: Architecture | SOA | FABRIQ | Indigo | Web Services

I am writing a very, very, very big application at the moment and I am totally swamped in a 24/7 coding frenzy that’s going to continue for the next week or so, but here’s one little bit to think about and for which I came up with a solution. It’s actually a pretty scary problem.

Let’s say you have a transactional serviced component (or make that a transactional EJB) and you call an HTTP web service from it that forwards some information to another service. What happens if the transaction fails for any reason? You’ve just produced a phantom record. The web service on the other end should never have seen that information. In fact, that information doesn’t exist from the viewpoint of your rolled-back local transaction. And, as of yet, there is no infrastructure in place that gives you interoperable transaction flow - and even if there were, the other web service might not support it. What should you do? Panic?

There is help right in the platform (Enterprise Services that is). Your best friend for that sort of circumstance is System.EnterpriseServices.CompensatingResourceManager.

The use case here is to call another service to allocate some items from an inventory service. The call is strictly asynchronous and the remote service will eventually turn around and call an action on my service (they have a “duplex” conversation using asynchronous calls going back and forth). Instead of calling the service from within my transactional method, I am deferring the call until the transaction is being resolved. Only when DTC is sure that the local transaction will go through is the web service call made. There is no way to guarantee that the remote call succeeds, but it does at least eliminate the very horrible side effects on overall system consistency caused by phantom calls. It is in fact quite impossible to implement “Prepare” correctly here, since the remote service may fail processing the (one-way) call on a different thread and hence I might never get a SOAP fault indicating failure. Because that’s so and because I really don’t know what the other service does, I am not writing any specific recovery code in the “Commit” phase. Instead, my local state for the conversation indicates the current progress of the interaction between the two services and logs an expiration time. Once that expiration time has passed without a response from the remote service, a watchdog will pick up the state record, create a new message for the remote service and replay the call.

For synchronous call scenarios, you could implement (not shown here) a two-step call sequence to the remote service, which the remote service needs to support, of course. In “Prepare” (or in the “normal code”) you would pass the data to the remote service and hold a session state cookie. If that call succeeds, you vote “true”. In “Commit” you would issue a call to commit that data on the remote service for this session, on “Abort” (remember that the transaction may fail for any reason outside the scope of the web service call), you will call the remote service to cancel the action and discard the data of the session. What if the network connection fails between the “Prepare” phase call and the “Commit” phase call? That’s the tricky bit. You could log the call data and retry the “Commit” call at a later time or keep retrying for a while in the “Commit” phase (which will cause the transaction to hang). There’s no really good solution for that case, unless you have transaction flow. In any event, the remote service will have to default to an “Abort” once the session times out, which is easy to do if the data is kept in a volatile session store over there. It just “forgets” it.

However, all of this is much, much better than making naïve, simple web service calls that fan out intermediate data from within transactions. Fight the phantoms.

At the call location, write the call data to the CRM transaction log using the Clerk:

// Capture the outbound call's data in a CRM log record and enlist the
// compensator in the surrounding transaction via the Clerk:
AllocatedItemsMessage aim = new AllocatedItemsMessage();
aim.allocatedAllocation = <<< copy that data from elsewhere >>>
Clerk clerk = new Clerk( typeof(SiteInventoryConfirmAllocationRM),
                         "SiteInventoryConfirmAllocationRM",
                         CompensatorOptions.AllPhases );
SiteInventoryConfirmAllocationRM.ConfirmAllocationLogRecord rec =
    new SiteInventoryConfirmAllocationRM.ConfirmAllocationLogRecord();
rec.allocatedItemsMessage = aim;
clerk.WriteLogRecord( rec.XmlSerialize() );
clerk.ForceLog(); // the record is durable once this returns

Write a compensator that picks up the call data from the log and forwards it to the remote service. In the “Prepare” phase, the minimum work that can be done is to check whether the proxy can be constructed. You could also make sure that the call URL is valid and the server name resolves, and you could even try a GET on the service’s documentation page or call a “Ping” method the remote service may provide. That all serves to verify, as well as you can, that the “Commit” call has a good chance of succeeding:


using System.EnterpriseServices.CompensatingResourceManager;
using System.IO;                 // StringReader/StringWriter
using System.Xml.Serialization;  // XmlSerializer
using …

/// <summary>
/// This class is a CRM compensator that will invoke the allocation confirmation
/// activity on the site inventory service if, and only if, the local transaction
/// enlisting it is succeeding. Using the technique is a workaround for the lack
/// of transactional I/O with HTTP web services. While the compensator cannot make
/// sure that the call will succeed, it can at least guarantee that we do not produce
/// phantom calls to external services.
/// </summary>

public class SiteInventoryConfirmAllocationRM : Compensator
{
  private bool vote = true;

  [Serializable]
  public class ConfirmAllocationLogRecord
  {
    public SiteInventoryInquiries.AllocatedItemsMessage allocatedItemsMessage;           

    internal string XmlSerialize()
    {
      StringWriter sw = new StringWriter();
      XmlSerializer xs = new XmlSerializer(typeof(ConfirmAllocationLogRecord));
      xs.Serialize(sw,this);
      sw.Flush();
      return sw.ToString();
    }

    internal static ConfirmAllocationLogRecord XmlDeserialize(string s)
    {
      StringReader sr = new StringReader(s);
      XmlSerializer xs = new XmlSerializer(typeof(ConfirmAllocationLogRecord));
      return xs.Deserialize(sr) as ConfirmAllocationLogRecord;
    }
  }

  // Phase 1: called once per log record. Returning true tells the CRM to
  // forget the record (no further delivery); returning false keeps it around
  // for the Commit/Abort phases.
  public override bool PrepareRecord(LogRecord rec)
  {
    try
    {
      SiteInventoryInquiriesWse sii;
      ConfirmAllocationLogRecord calr = ConfirmAllocationLogRecord.XmlDeserialize((string)rec.Record);
      sii = InventoryInquiriesInternal.GetSiteInventoryInquiries( calr.allocatedItemsMessage.allocatedAllocation.warehouseName );
      vote = sii != null;   // proxy could be constructed; we can vote to commit
      return false;         // keep the record for the Commit phase
    }
    catch( Exception ex )
    {
      ExceptionManager.Publish( ex );
      vote = false;         // force the enlisting transaction to abort
      return true;          // nothing left to deliver for this record
    }
  }

  // The aggregated prepare vote for this compensator's enlistment.
  public override bool EndPrepare()
  {
    return vote;
  }


  // Phase 2: the local transaction is committing, so it is now safe to let
  // the deferred web service call out. Returning true forgets the record.
  public override bool CommitRecord(LogRecord rec)
  {
    SiteInventoryInquiriesWse sii;
    ConfirmAllocationLogRecord calr = ConfirmAllocationLogRecord.XmlDeserialize((string)rec.Record);
    sii = InventoryInquiriesInternal.GetSiteInventoryInquiries( calr.allocatedItemsMessage.allocatedAllocation.warehouseName );

    try
    {
      sii.ConfirmAllocation( calr.allocatedItemsMessage );
    }
    catch( Exception ex )
    {
      // The transaction outcome can no longer change here; the watchdog/replay
      // mechanism described above has to take over from this point.
      ExceptionManager.Publish( ex );
    }
    return true;
  }
}

One year ago (plus 5 days), I posted this here on my blog. I just found it again through my referral stats. Of course, that post isn't about Juliet, at all. Fun.

Categories: Indigo | Web Services

The evolution of the in-memory concept of a message in the managed Microsoft Web Services stack(s) is quite interesting to look at. When you compare the concepts of System.Web.Services (ASMX), Microsoft.Web.Services (WSE) and System.MessageBus (Indigo M4), you'll find that this most fundamental element has undergone some interesting changes and that the Indigo M4 incarnation of "Message" is actually a bit surprising in its design.

ASMX

In the core ASP.NET Web Services model (nicknamed ASMX), the concept of an in-memory message doesn't really surface anywhere in the programming model unless you use the ASMX extensibility mechanism. The abstract SoapMessage class, which comes in concrete SoapClientMessage and SoapServerMessage flavors, has two fundamental states that depend on the message stage that the message is inspected in: the message is either unparsed or parsed (some say "cracked").

If it's parsed, you can get at the parameters that are being passed to the server or are about to be returned to the client, but the original XML data stream of the message is no longer available and all headers have likewise either been mapped onto objects or lumped into an "unknown headers" array. If the message is unparsed, all you get is a text stream that you'll have to parse yourself. If you want to add, remove or modify headers while processing a message in an extension, you will have to read and parse your copy of the input stream (the message text) and write the resulting message to an output stream that's handed onwards to the next extension or to the infrastructure. In essence that means that if you had two or three ASMX-style SOAP extensions that implement security, addressing and routing functionality, you'd be parsing the message three times and serializing it three times just so that the infrastructure would parse it yet again. Not so good.
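To see why, here's a minimal sketch of such an extension (the SoapExtension plumbing - ChainStream, ProcessMessage, the GetInitializer/Initialize overrides - is the real ASMX API; what the extension does with the DOM is invented). Every extension in the chain repeats exactly this parse/serialize round-trip:

using System;
using System.IO;
using System.Web.Services.Protocols;
using System.Xml;

public class HeaderTweakExtension : SoapExtension
{
    private Stream oldStream, newStream;

    public override Stream ChainStream(Stream stream)
    {
        // Interpose our own buffer between the infrastructure and us.
        oldStream = stream;
        newStream = new MemoryStream();
        return newStream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        if (message.Stage == SoapMessageStage.BeforeDeserialize)
        {
            // Parse our private copy of the incoming message text ...
            XmlDocument dom = new XmlDocument();
            dom.Load(oldStream);
            // ... add/remove/modify headers in 'dom' here ...
            // ... and serialize the whole thing again for the next stage.
            dom.Save(newStream);
            newStream.Position = 0;
        }
    }

    public override object GetInitializer(Type serviceType) { return null; }
    public override object GetInitializer(LogicalMethodInfo methodInfo,
        SoapExtensionAttribute attribute) { return null; }
    public override void Initialize(object initializer) { }
}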

WSE

The Web Services Enhancements (WSE) have a simple but very effective fix for that problem. The WSE team needed to use the ASMX extensibility point but found that if they built all their required extensions using the ASMX model, they'd run into that obvious performance problem. Therefore, WSE has its own pipeline and its own extensibility mechanism that plugs as one big extension into ASMX, and when you write extensions (handlers) for WSE, you don't get a stream but an in-memory info-set in the form of a SoapEnvelope (which is derived from System.Xml.XmlDocument and is therefore a DOM). Parsing the XML text just once and having all processing steps work on a shared in-memory object model seems optimal. Can it really get any better than "parse once" as WSE does it?
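For comparison, a WSE 2.0 handler sketch (SoapInputFilter and SoapEnvelope are the real WSE 2.0 types; the header inspection is invented). Note that there's no stream in sight - the filter gets the already-parsed DOM:

using System.Xml;
using Microsoft.Web.Services2;

public class AuditInputFilter : SoapInputFilter
{
    public override void ProcessMessage(SoapEnvelope envelope)
    {
        // SoapEnvelope derives from XmlDocument; all filters in the pipeline
        // share this one in-memory infoset, so nobody re-parses the text.
        if (envelope.Header != null)
        {
            foreach (XmlNode header in envelope.Header.ChildNodes)
            {
                // inspect or rewrite header elements in place here
            }
        }
    }
}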

Indigo

When you look at the Indigo concept of Message (the Message class in the next milestone will be the same in spirit, similar in concept and different in detail and simpler as a result), you'll find that it doesn't contain a reference to an XmlDocument or some other DOM-like structure. The Indigo message contains a collection of headers (which in the M4 milestone also come in an "in-memory only" flavor) and a content object, which has, as its most important member, an XmlReader-typed Reader property.

When I learned about this design decision a while ago, I was a bit puzzled why that's so. It appeared clear to me that if you kept the message parsed in a DOM, you'd have a good solution if you want to hand the message down a chain of extensibility points, because you don't need to reparse. The magic sentence that woke me up was "We need to support streaming". And then it clicked.

Assume you want to receive a 1GB video stream over an Indigo TCP multicast or UDP connection (even if you think that's a silly idea - work with me here). Because Indigo will represent the message containing that video as an XML Infoset (mind that this doesn't imply that we're talking about base64-encoded content in a UTF-8 angle bracket document and therefore 2GB on the wire), we've got some problems with a DOM-based solution. A DOM like XmlDocument is only ready for business when it has seen the end tag of its source stream. This is not so good for streams of that size, because you surely would want to see the video stream as it downloads and, if the video stream is a live broadcast, there may simply be no defined end: the message may have a virtually infinite size with the "end-tag" being expected just shortly before judgment day.

There's something philosophically interesting about a message relaying a 24*7*365 video stream where the binary content inside the message body starts with the current video broadcast bits as of the time the message is generated and then never ends. The message can indeed be treated as being well-formed XML because there is always a theoretical end to it. The end-tag just happens to be a couple of "bit-years" away.

Back to the message design: When Indigo gets its hands on a transport stream, it layers a Message object over the raw bits using an XmlReader. Then it peeks into the message and parses soap:Envelope and everything inside soap:Header. The headers it finds go into the in-memory header collection. Once it sees soap:Body, Indigo stops and backs off. The result is a partially parsed in-memory message for which all headers are available in memory while the body of the message is left sitting in an XmlReader. When the XmlReader sits on top of a NetworkStream, we now have a construct where Indigo can already work on the message and its control information (headers) while the network socket is still open and the rest of the message is still arriving (or portions haven't even been sent by the other party).
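Stripped of all of Indigo's actual machinery, the principle looks roughly like this (a toy sketch of my own; SOAP 1.2 namespace assumed, error handling omitted):

using System.Collections;
using System.Xml;

public class MessageSketch
{
    private const string SoapNs = "http://www.w3.org/2003/05/soap-envelope";

    public ArrayList Headers = new ArrayList(); // parsed, buffered in memory
    public XmlReader Body;                      // still positioned on the wire

    public MessageSketch(XmlReader reader)
    {
        reader.MoveToContent();
        reader.ReadStartElement("Envelope", SoapNs);
        reader.MoveToContent();
        if (reader.LocalName == "Header" && reader.NamespaceURI == SoapNs)
        {
            reader.ReadStartElement(); // descend into soap:Header
            reader.MoveToContent();
            while (reader.NodeType == XmlNodeType.Element)
            {
                // buffer each header element into an in-memory infoset
                XmlDocument headerDom = new XmlDocument();
                headerDom.LoadXml(reader.ReadOuterXml());
                Headers.Add(headerDom.DocumentElement);
                reader.MoveToContent();
            }
            reader.ReadEndElement(); // leave soap:Header
            reader.MoveToContent();
        }
        // Stop right here: the reader now sits on soap:Body. Bytes beyond
        // this point may not even have arrived from the network yet.
        Body = reader;
    }
}

Layer that over new XmlTextReader(networkStream) and the headers are usable while the body is still in flight.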

Unless an infrastructure extension must touch the body (in-message body encryption or signatures do indeed spoil the party here), Indigo can process the message, just ignore the body portion and hand it to the application endpoint for processing as-is. When the application endpoint reads the message through the XmlReader, it therefore pulls the bits directly off the wire. Another variant of this, and the case where it really gets interesting, is that using this technique, arbitrarily large data streams can be routed over multiple Indigo hops using virtualized WS-Addressing addresses, where every intermediary server just forwards the bits to the next hop as they arrive. Combine this with publish and subscribe services and Indigo's broadcasting abilities and this is getting really sexy for all sorts of applications that need to traverse transport-level obstacles such as firewalls or where you simply can't use IP.

For business applications, this support for very large messages is not only very interesting but actually vital for a lot of applications. In our BizTalk workshops we've had quite a few customers who exchange catalogs for engineering parts with other parties. These catalogs easily exceed 1GB in size on the wire. If you want to expand those messages up into a DOM, you've got a problem. Consequently, neither WSE nor ASMX nor BizTalk Server nor any other DOM-based solution that isn't running on a well-equipped 64-bit box can successfully handle such real-customer-scenario messages. Once messages support streaming, you have that sort of flexibility.

The problem that remains with XmlReader is that once you touch the body, things get a bit more complex than with a DOM representation. The XmlReader is a "read once" construct that usually can't be reset to its initial state. That is specifically true if the reader sits on top of a network stream and returns the translated bits as they arrive. Once you touch the message content in the infrastructure, the message is therefore "consumed" and can't be used for further processing. The good news is, though, that if you buffer the message content into a DOM, you can layer an XmlNodeReader over the DOM's document element and forward the message with that reader. If you only need to read parts of the message or if you don't want to use the DOM, you can layer a custom XML reader over a combination of your buffered data and the original XmlReader.
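The buffering trick in a nutshell (a hedged sketch; 'bodyReader' stands for the message's original body reader):

// Consume the live, read-once body into a DOM buffer ...
XmlDocument buffer = new XmlDocument();
buffer.LoadXml(bodyReader.ReadOuterXml());
// ... let the infrastructure step inspect/decrypt/verify 'buffer' ...
// ... then hand downstream a fresh reader over the buffered infoset:
XmlReader replayableBody = new XmlNodeReader(buffer.DocumentElement);

To the next processing step, replayableBody looks just like the original body reader - except that it can be recreated from the buffer as often as needed.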

Categories: Technology | Indigo | Web Services

November 30, 2003
@ 06:14 PM

I'll put together the v1.5 build version of dasBlog next week. The v1.4 "PDC build" proved to be "true to the spirit of PDC bits" and turned out to have a couple of problems with the new "dasBlog" theme and some other inconveniences that v1.5 will fix. The true heroes of v1.5 are Omar and the many other frequent contributors to the workspace; I just didn't have enough time to add features recently.

As I blogged last week, I am very busily involved in an exciting (mind that I use the word not as carelessly as some marketing types) infrastructure project on service-oriented architectures, autonomous computing and agile machines. I wrote some 50 pages of very dense technical specification and a lot of "proof of concept" code in the past two weeks and we're in the process of handing this off to the development team. I am having a great time and a lot of fun, but because the schedule is insanely tight for a variety of reasons (I am not complaining, I signed it knowingly), I've been on 16 hour days for most of the past two weeks. In some ways, this is also an Indigo project, because I am loosely aligning some of my core architecture with a few fundamentals from the Indigo connector architecture published at PDC so that we can take full advantage of Indigo once it's ready. The Indigo approach of keeping the Message body in an XmlReader is an ingenious idea for what I am doing here. In essence, if you only need to look at the headers inside an intermediary in a one-way messaging infrastructure like the one I am building right now, you may never even need to read anything from the body until you push the resulting message out again. So why suck it into a DOM? Just map the input stream to the output stream and hand the body through as you get it. That way and under certain circumstances, my bits may already be forwarding a message to the next hop when it hasn't even fully arrived yet.

One of the "innovative approaches" (for me, at least) is that within this infrastructure, which has a freely composable, nestable pipeline of "aspects", I am using my lightweight transaction manager to coordinate the failure management of such independently developed components. The difficulty of that, and the absence of an "atomic" property of a composite pipeline activity, are two things that bugged me most about aspects. There's a lot more potential in this approach, for instance enforcement of composition rules. It works great in theory and in the prototype code and I am curious how it turns out once it hits a real-life use-case. We're getting there soon. (My first loud thinking about something like this was at the very bottom of this rant here.) I'll keep you posted.

In unrelated news: Because I know that I'll be doing a lot of Longhorn work and demos in the upcoming months (my Jan/Feb/Mar schedule looks like I am going to visit every EMEA software developer personally), I've meanwhile figured that my loyal and reliable digital comrade (a Dell Inspiron 8100) will be retired. Its successor will have a green chassis.

Categories: dasBlog | Indigo | Web Services

Jon Udell writes in his most recent column that some think that there is a "controversy" about the use of XML namespaces. This seems to stem from the sad fact that RSS never got a proper namespace assigned to it while being one of the hottest schema specs in the XML space right now. Sorry, there may be people in disbelief, but the XML Namespaces spec is normative and referenced in the current XML 1.0 (Second Edition) spec. The empty namespace is a namespace.

Some notable experts — including Sean McGrath, CTO of Propylon in Dublin, Ireland — argue that namespaces should be avoided for that reason.

You can't avoid namespaces; they are automatic if you use XML today. If you don't declare one for your vocabulary/schema, you are contributing to a large cloud of "stuff" sitting in the "not part of any" namespace. The "empty" namespace (which essentially says "not part of any namespace") is the XML equivalent of the "these are just some tags" garbage dump.
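To illustrate (namespace URI invented): both of the following elements are handled by the namespace rules; the first just sits in that garbage dump, while the second belongs to an identifiable vocabulary:

<item>some tags</item>

<item xmlns="http://schemas.example.org/2003/11/catalog">an identifiable vocabulary</item>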

Categories: Web Services | Technology | XML

I couldn't find one, so I made a WS-PolicyAttachment UDDI bootstrap file for import into Windows UDDI Services.

When I put that together, I ran into a bug in the spec. Point 5.1 shows the tModel for the remote policy reference. The tModelKey shown there is

<tModel tModelKey="uuid:0b1b5a47-bebf-3b7d-9802-f2dd80a91adebd3966a8-faa5-416e-9772-128554343571">

which is a bit long for a uuid, isn't it? Correct is the following (as the spec later explains):

<tModel tModelKey="uuid:0b1b5a47-bebf-3b7d-9802-f2dd80a91ade">

The bug even survived the revision from 1.0 to 1.1, which makes me wonder whether anyone ever reads these specs in any depth.

Categories: Web Services | Technology | UDDI

H2/2003, moving up one notch on the WS stack.

Yesterday, all the travel madness of H1/2003, which began in January, officially ended. I have a couple of weeks at the office ahead of me and that's, even if it may sound odd, a fantastic thing. For the first half of the year, and quite a bit of last year too, I spent most of my research time working deep down in the public and not-so-public extensibility points of Enterprise Services and Web Services, trying to understand the exact details of how they work, figuring out how to inject more and tweak existing functionality, and exploring whether certain development patterns such as AOP could enhance the development experience and productivity of my clients (and all of you out there who are reading my blog). I've been in 21 countries in this first half of the year alone and at about 40 different events, talking about what I found working with these technologies on some more and some less serious projects, and by doing that and speaking to people I learned a lot; I also think that I helped to inspire quite a few people's thinking.

Now it's time to move on and focus on the bigger picture. Starting with version 2.0 of the Microsoft Web Services Enhancements that's due out by the end of this summer, Web Services will finally become less Web and more Services. The WSE 2.0 stack will break the tie between HTTP and SOAP by enabling other transports, and it'll add support for some of the most important WS-* specs such as WS-Policy, WS-Addressing and related specs. The now released UDDI services in Windows Server 2003 put a serious local UDDI registry at my fingertips. BizTalk Server 2004's new orchestration engine looks awesome. There's a lot of talk about Service Oriented Architectures, but too little to see and touch for everyone to believe that this stuff is real. I think that's a good job description for H2/2003. My UDDI provider key: 7f0baedf-3f0d-4de1-b5e7-c35f668964d5

Categories: Web Services | Technology | UDDI