Arvindra blogged it first and I'll add the immediate link to the video stream (because the regular links don't work on my machine for some strange reason)

Categories: TechEd Europe

July 24, 2004
@ 03:21 PM

Daniel Fisher aka "Lenny Bacon" joined the pack and is already having some fun.

Categories: newtelligence

Microsoft Watch highlights the recently surfaced HP memo that speculates that Microsoft would start enforcing its patent portfolio on Open Source. How likely is it? It is an interesting question, indeed. Here’s what I think:

The patent situation, especially on the middleware market, used to be very much like the cold war between the USSR and the USA in the last century. One side moves, everyone dies. My guess is that if Microsoft had gone out and dragged Sun to court over J2EE and Sun had countersued over .NET, things would have gotten really, really nasty. The very foundations of the J2EE stack are sitting right in the middle of a substantial Microsoft patent minefield covering what we know as MTS and COM+. The reverse doesn’t look much better. Now Sun and Microsoft made peace on that front and are even looking to negotiate a broad IP cross-licensing deal to end that sort of arms race. Cross-licensing of patents is quite common in this industry and in most other industries as well. So where does that leave the grassroots Open Source movement? Not in a good place, for sure.

If you do research and you pour millions or even billions into that research, there has to be some return on that investment. And there is a difference between academic research and research that yields commercial products. I am not saying that there is no close relationship between the two, but they are done with different goals. If you do research for commercial purposes, regardless of whether you do it in the automotive industry, the pharmaceutical industry, or the software industry, the results of your research deserve protection. At the same time, it would be harmful to society at large if everyone kept all results of all research under wraps. So governments offer companies a deal: you disclose the results of your research and we grant you a limited-time monopoly to use that technology exclusively. If you decide to share the technology with other parties, you can be forced to allow third parties to license it on appropriate terms. And German patent law (§11 (1)), for instance, explicitly states that patents do not cover noncommercial use of technology in a private environment.

Now, if states offer that sort of system, a company that is built almost entirely on intellectual property (like Sun, IBM, Oracle, Apple, Microsoft and so forth) must play within the system. They must file for patents. If they don’t, they end up with something like the Eolas mess on their hands, and that is not pretty. Even if some of the patents seem absolutely ridiculous: if the patent lawyers at a large company figure out that a certain technology is not covered by an existing patent, they must go and protect it. Not necessarily to enforce it, but rather to avoid someone else enforcing it on them. And because a lot of these patents are indeed idiotic, they are rarely enforced and most often quite liberally licensed. Something similar is true for trademarks. Microsoft has no choice but to chase Lindows (now Linspire) or even poor “Mike Rowe Soft”, because it must defend its trademarks, by law. If it lets a case slip, it might lose them. It’s not about being nasty, it’s just following the rules that lawmakers have set.

Now, if someone starts cheating on the research front and consumes IP from that system but never contributes IP to the system, it does indeed change the ballgame. If you don’t have a patent portfolio that is interesting enough for someone else to enter a (possibly scoped) cross-licensing deal with you, and you don’t license such patents for money, but instead break the other parties’ rightfully (it’s the law!) acquired, time-limited monopolies on commercial use of the respective technologies, and you do so for profit, then you are simply violating the law. It’s as simple as that. So, if I held Sun’s or Microsoft’s patent portfolio, would I ask those who profit from commercialization of those patents for my share? I really might give it some serious consideration. I think companies like Red Hat make wonderful targets, because they are commercial entities that profit greatly from a lot of IP that they have not (as I suspect) properly licensed for commercial exploitation. The interesting thing is that my reading of the (German) patent law is that the non-profit Apache Foundation can actually use patented technology without being at risk, but a for-profit company cannot adopt its results without being liable to acquire a license. Even giving away “free” software in order to benefit from the support services is commercialization. So if Red Hat includes some Apache project’s code that steps on patents, I’d say they are in trouble.

Now, if someone were to “reimplement” a patented drug, the pharmaceutical company sitting on the patent would sue them out of existence the next second without even blinking. Unless I am really badly informed, the entire biotech industry is entirely built on IP protection. All these small biotech firms are doing research that eventually yields protected IP and that’s what they look to turn into profit. They’re not in the business of producing and distributing the resulting drugs on a world-wide scale, they look to share the wealth with the pharmaceutical giants that have the respective infrastructure. The software industry is a very, very tame place against what’s going on in other industries. So will Sun, IBM, Oracle, Apple, and/or Microsoft eventually become more serious about drawing profit from the rights they hold? Right now it would be a very, very stupid thing to do in terms of the resulting, adverse marketing effect.

Now imagine Sun’s unfortunate decline continues, or some other technology company with a substantial patent portfolio (and not just some weak copyright claims) falls into the hands of a litigious bunch of folks, as in the case of SCO. That’s when the shit is going to hit the fan. Big time.

Categories: IT Strategy

Unless you enable the config setting below, WSE injects intentionally invalid “Via” routing information into ReplyTo and FaultTo addresses for security reasons and therefore you can’t just turn around and create, for instance, a new SoapSender(SoapRequestContext.Address.ReplyTo) at the receiving endpoint or set the reply envelope’s context like envelope.Context.Addressing.Destination = SoapRequestContext.Address.ReplyTo. Because “Via” trumps any other address in an endpoint reference for delivery, a reply to such an invalidated EPR will usually yield a 404. I fell into that hole for the second or third time now and it somehow never stuck in long-term memory, so this is the persisted “note to self”  ;-)

      <allowRedirectedResponses enabled="true" />
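For context, that switch lives in WSE 2.0's configuration section of the app's web.config or app.config. The exact nesting below is reconstructed from memory, so treat it as a sketch and verify against the WSE documentation:

```xml
<configuration>
  <!-- WSE 2.0 configuration section; the element placement is an assumption -->
  <microsoft.web.services2>
    <messaging>
      <allowRedirectedResponses enabled="true" />
    </messaging>
  </microsoft.web.services2>
</configuration>
```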

Categories: Web Services

I was a little off when I compared my problem here to a tail call. Gordon Weakliem corrected me with the term "continuation".

The fact that the post got 28 comments shows that this seems to be an interesting problem and, naming aside, it is indeed a tricky thing to implement in a framework when the programming language you use (C# in my case) doesn't support the construct. What's specifically tricky about the concrete case that I have is that I don't know where I am yielding control to at the time when I make the respective call.

I'll recap. Assume there is the following call:

CustomerService cs = new CustomerService();
cs.FindCustomer(customerId);

FindCustomer is a call that will not return any result as a return value. Instead, the invoked service comes back into the caller's program at some completely different place, such as this:

public void FindCustomerReply(Customer[] result)

So what we have here is a "duplex" conversation. The result of an operation initiated by an outbound message (call) is received, some time later, through an inbound message (call), but not on the same thread and not on the same "object". You could say that this is a callback, but that's not precisely what it is, because a "callback" usually happens while the initiating call (as above FindCustomer) has not yet returned back to its scope or at least while the initiating object (or an object passed by some sort of reference) is still alive. Here, instead, processing of the FindCustomer call may take a while and the initiating thread and the initiating object may be long gone when the answer is ready.

Now, the additional issue I have is that at the time when the FindCustomer call is made, it is not known which "FindCustomerReply" message handler is going to be processing the result, and it is really not known what's happening next. The decision about what happens next and which handler is chosen depends on several factors, including the time that it takes to receive the result. If FindCustomer is called from a web page and the service providing FindCustomer drops a result at the caller's doorstep within 2-3 seconds [1], the FindCustomerReply handler can go and hijack the initial call's thread (and HTTP context) and render a page showing the result. If the reply takes longer, the web page (the caller) may lose its patience [2] and choose to continue by rendering a page that says "We are sending the result to your email account.", and the message handler will not throw HTML into an HTTP response on an open socket, but rather render it to an email and send it via SMTP, and maybe even alert the user through his/her Instant Messenger when/if the result arrives.

[1] HTTP Request => FindCustomer() =?> "FindCustomerReply" => yield to CustomerList.aspx => HTTP Response
[2] HTTP Request => FindCustomer() =?> Timeout!            => yield to YouWillGetMail.aspx => HTTP Response
                               T+n =?> "FindCustomerReply" => SMTP Mail
                                                           => IM Notification

So, in case [1] I need to correlate the reply with the request and continue processing on the original thread. In case [2], the original thread continues on a "default path" without an available reply and the reply is processed on (possibly two) independent threads and using two different notification channels.
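A minimal sketch of the correlation mechanics this implies (all names here are mine, not the actual infrastructure): the initiating thread parks on a wait handle for its patience window; the reply handler looks up the pending request by correlation ID and either wakes the waiter (case [1]) or learns that it must use an alternate channel (case [2]).

```csharp
using System;
using System.Collections;
using System.Threading;

public class ReplyCorrelator
{
    class Slot
    {
        public ManualResetEvent Arrived = new ManualResetEvent(false);
        public object Reply;
    }

    readonly Hashtable pending = Hashtable.Synchronized(new Hashtable());

    // Called by the initiator right after sending the request message.
    // Returns the reply (case [1]) or null if patience ran out (case [2]).
    public object WaitForReply(string correlationId, TimeSpan patience)
    {
        Slot slot = new Slot();
        pending[correlationId] = slot;
        bool gotIt = slot.Arrived.WaitOne(patience, false);
        pending.Remove(correlationId);
        return gotIt ? slot.Reply : null;
    }

    // Called by the message handler when the reply shows up.
    // Returns true if an initiator was still waiting on its thread.
    public bool DeliverReply(string correlationId, object reply)
    {
        Slot slot = (Slot)pending[correlationId];
        if (slot == null)
            return false; // initiator gave up: route to SMTP/IM instead
        slot.Reply = reply;
        slot.Arrived.Set();
        return true;
    }
}
```

A production version would also have to resolve the race where the reply arrives just as the waiter gives up; the sketch ignores that.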

A slightly different angle. Consider a workflow application environment in a bank, where users are assigned tasks and simply fetch the next thing from the to-do list (by clicking a link in an HTML-rendered list). The reply that results from "LookupAndDoNextTask" is a message that contains the job that the user is supposed to do.  

[1] HTTP Request => LookupAndDoNextTask() =?> Job: "Call Customer" => yield to CallCustomer.aspx => HTTP Response
[2] HTTP Request => LookupAndDoNextTask() =?> Job: "Review Credit Offer" => yield to ReviewCredit.aspx => HTTP Response
[3] HTTP Request => LookupAndDoNextTask() =?> Job: "Approve Mortgage" => yield to ApproveMortgage.aspx => HTTP Response
[4] HTTP Request => LookupAndDoNextTask() =?> No Job / Timeout => yield to Solitaire.aspx => HTTP Response

In all of these cases, calls to "FindCustomer()" and "LookupAndDoNextTask()" that are made from the code that deals with the incoming request will (at least in the theoretical model) never return to their caller, and the thread will continue to execute in a different context that is "TBD" at the time of the call. By the time the call stack is unwound and the initiating call (like FindCustomer) indeed returns, the request is therefore fully processed and the caller may not perform any further actions.

So the issue at hand is to make that fact clear in the programming model. In ASP.NET, there is a single construct called "Server.Transfer()" for that sort of continuation, but it's very specific to ASP.NET and requires that the caller knows where it wants to yield control to. In the case I have here, the caller knows that it is surrendering the thread to some other handler, but it doesn't know to whom, because this is dynamically determined by the underlying frameworks. All that's visible and should be visible in the code is a "normal" method call.

cs.FindCustomer(customerId) might therefore not be a good name, because it looks "too normal". And of course I don't have the powers to invent a new statement for the C# language like continue(cs.FindCustomer(customerId)) that would result in a continuation that simply doesn't return to the call location. Since I can't do that, there has to be a different way to flag it. Sure, I could put an attribute on the method, but Intellisense wouldn't show that, would it? So it seems the best way is to have a convention of prefixing the method name.

There were a bunch of ideas in the comments for method-name prefixes. Here is a selection:

  • cs.InitiateFindCustomer(customerId)
  • cs.YieldFindCustomer(customerId)
  • cs.YieldToFindCustomer(customerId)
  • cs.InjectFindCustomer(customerId)
  • cs.PlaceRequestFindCustomer(customerId)
  • cs.PostRequestFindCustomer(customerId)

I've got most of the underlying correlation and dispatch infrastructure sitting here, but finding a good programming model for that sort of behavior is quite difficult.

[Of course, this post won't make it on Microsoft Watch, eWeek or The Register]

Categories: Architecture | SOA | Technology | ASP.NET | CLR

July 19, 2004
@ 02:42 PM

News is what is made news.

Case in point: This sentence on my blog here: "There's apparently a related project Boa (another serpent name along the family line of Viper that was the original codename for MTS), including the business markup language BML (pronounced "Bimmel") that he's involved in and he talked a bit about that, but of course I'd be killed if I gave out more details." now prompts, directly or indirectly, this here on Microsoft Watch and this on eWeek.

Nobody said that the project was software in product development. Nobody said it was about stuff that would eventually ship. Nobody really said anything that would be in any way relevant to technical or business decision makers today. What this shows is that there's a bit too much appetite for the next big thing while we're all still working on making the current big thing happen. Do you seriously think I am someone who'd casually leak Microsoft trade secrets on his blog?

And.... seriously.... go back and read the first six sentences on that entry with your brain switched into "active mode".

Categories: Blog | Other Stuff

July 19, 2004
@ 07:07 AM

The recording of last week's .NET Rocks show on which I explained my view on the "services mindset" (at 4AM in the morning) is now available for download from

Categories: SOA

July 16, 2004
@ 11:49 AM

AAMOF, BOA along with BML are PBS TLA's created when we were TUI and MSU while having a late dinner. TMA in this industry and way too much fuzzy MBS. TWOP! IAR, giving out SSI under NDA would get me into VDS, get me?.

(You can speculate all you want in the comments section).

Categories: Other Stuff

newtelligence AG will be hosting an open workshop on service-oriented development, covering principles, architecture ideas and implementation guidance on October 13-15 in Düsseldorf, Germany.

The workshop will be held in English, will be hosted by my partner and “Mr. Methodologies” Achim Oellers and myself, and is limited to just 15 (!) attendees to assure an interactive environment that maximizes everyone’s benefit. The cap on the number of attendees also allows us to adjust the content to individual needs to some extent.

We will cover the “services philosophy” and theoretical foundations of service-compatible transaction techniques, scalability and federation patterns, autonomy and other important aspects. And once we’ve shared our “services mind-set”, we will take the participants on a very intense “guided tour” through (a lot of) very real and production-level quality code (including the Proseware example application that newtelligence built for Microsoft Corporation) that turns the theory to practice on the Windows platform and shows that there’s no need to wait for some shiny future technology to come out in 2 years’ time to benefit from services today.

Regular pricing for the event is €2500.00 (plus applicable taxes) and includes:

  • 3-day workshop in English from 9:00 – 18:00 (or later depending on topic/evening) 
  • 2 nights’ hotel stay (Oct 13th and 14th)
  • Group dinner with the experts on the first night.  The 2nd night is at your disposal to enjoy Düsseldorf’s fabulous Altstadt at your own leisure
  • Lunch (and snacks/drinks throughout the day)
  • Printed materials (in English), as appropriate
  • Post-Workshop CD containing all presentations and materials used/shown

For registration inquiries, information about the prerequisites, as well as for group and early-bird discount options, please contact Mr. Fons Habes via If the event is sold out at the time of your inquiry or if you are busy on this date, we will be happy to pre-register you for one of the upcoming event dates or arrange for an event at your site.

Categories: Architecture | SOA | newtelligence

July 15, 2004
@ 11:17 AM

We have a bit of a wording problem. With what I am currently building, we have a bit of (though not precisely) a notion of "tail calls". Here's an example:

public void LookupMessage(int messageId)
{
   MessageStoreService messageStore = new MessageStoreService();
   messageStore.LookupMessage(messageId);
}

The call to LookupMessage() doesn't return anything as a return value or through output parameters. Instead, the resulting reply message surfaces moments later at a totally different place within the same application. At the same time, the object with the method you see here, surrenders all control to the (anonymous) receiver of the reply. It's a tiny bit like Server.Transfer() in ASP.NET.

So the naming problem is that none of "GetMessage()", "LookupMessage()", or "RequestMessage()" sounds right, and they all look odd if there's no request/response pattern. The current favorite is to prefix all such methods with "Yield", so that we'd have "YieldLookupMessage()". Or "LookupMessageAndYield()"? Or something else?

Update: Also consider this

public void LookupCustomer(int customerId)
{
   CustomerService cs = new CustomerService();
   cs.FindCustomer(customerId);
}

Categories: SOA

Carl invited me for .NET Rocks on Thursday night. That is July 15th, 10 PM-Midnight Eastern Standard Time (U.S.) which is FOUR A.M. UNTIL SIX A.M. Central European Time (CET) on Friday morning. I am not sure whether my brain can properly operate at that time. The most fun thing would be to go out drinking Thursday night ;-)   I want to talk about (guess what) Services. Not Indigo, not WSE, not Enterprise Services, not SOAP, not XML. Services. Mindset first, tools later.

Categories: Architecture | SOA

Marcus Mac Innes has a funny collage of our TechEd show. I've spoken to the folks at MSDN and there's a good chance that the video recording will show up on Channel9 soon.

Categories: TechEd Europe

July 12, 2004
@ 01:46 PM

I might be blind to not have seen that before, but this here hit me over the third Guinness at an Irish Pub while answering a sudden technical question from my buddy Bart:

<wsa:ReplyTo xmlns:wsa="">

Read the EPR binding rules section 2.3 in the WS-Addressing spec and you'll find out just like me how distributed "call-stacks" work with WS-Addressing, if your choice of communication pattern is the far more flexible duplex (or here) pattern for datagram-based message conversations instead of the rather simplistic request/response model. Of course, any endpoint-reference can be stacked in the same way. I always wondered where the (deprecated) WS-Routing "path" went, which allowed specifying source routes. I think I ran into it.
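To illustrate (this is my own constructed example, not taken from the spec): an endpoint reference used as a reply target can carry opaque reference properties that the replying side must echo back per the binding rules, which is what makes the distributed "call-stack" work. The namespace URI is the March 2004 draft's, quoted from memory.

```xml
<wsa:ReplyTo xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/03/addressing">
  <wsa:Address>soap.tcp://host.example.com/service</wsa:Address>
  <wsa:ReferenceProperties>
    <!-- opaque to the replier; echoed back per the section 2.3 binding rules -->
    <app:ReturnPath xmlns:app="urn:example:app">frame-42</app:ReturnPath>
  </wsa:ReferenceProperties>
</wsa:ReplyTo>
```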

Categories: Web Services

July 12, 2004
@ 01:07 PM

I've had several epiphanies in the last 12 months or so. I don't know how it is for other people, but the way my thinking evolves is that I've got some inexpressible "thought clouds" going around in my head for months that I can't really get on paper or talk about in any coherent way. And then, at some point, there's some catalyst and "bang", it all comes together, and suddenly those clouds start raining ideas and my thinking very rapidly goes through an actual paradigm shift.

The first important epiphany occurred when Arvindra gave me a compact explanation of his very pragmatic view on Agent Technology and Queueing Networks, which booted the FABRIQ effort. Once I saw what Arvindra had done in his previous projects and I put that together with my thinking about services, a lot of things clicked. The insight that formed from there was that RPC'ish request/response interactions are very restrictive exceptions in a much larger picture where one-way messages and much more complex message flow-patterns possibly involving an arbitrary number of parties are the norm.

The second struck me while on stage in Amsterdam during the "The Nerd, The Suit, and the Fortune Teller" play, as Pat and I were discussing Service Oriented User Interaction. (You need to understand that we had very limited time for preparation and hence we had a good outline, but the rest of the script essentially said "go with the flow" and so most of it was pure improvisation theater.) The insight that formed can (with all due respect) be shortened to "the user is just another service". Not only shall users drive the interaction by issuing messages (commands) to a system from which they expect one or more out of a set of possible replies; there should also be a way for systems to drive an interaction by issuing messages to users, expecting one or more out of a set of possible replies. There is no good reason why either of these two directions of driving the interaction should receive preferred treatment. There is no client and there is no server. There are just roles in interactions. That moment, the 3-layer/3-tier model of building applications died a quick and painless death in my head. I think I have a new one, but the clouds are still raining ideas. Too early for details. Come back and ask in a few months.

Categories: Architecture | SOA

July 9, 2004
@ 09:10 AM

Jimmy Nilsson is really good at spotting flamebait.

Categories: Architecture

Adieu, Userland.

Ladies, if you haven’t switched your feeds to this address yet (it’s been a year now), now’s the time.

UPDATE: I've mirrored a few old stories from over there. The rest of the content is here anyways.

-----Original Message-----
From: []
Sent: Friday, July 09, 2004 12:10 AM
To: Clemens F. Vasters
Subject: Radio UserLand Renewal Reminder

Greetings from the community server for Radio UserLand 8. This is a reminder that your Radio UserLand serial number will expire soon.

This is the third renewal-reminder email. You will receive two subsequent reminders, one the day before your serial number expires, and one when your serial number has actually expired.

At any time you can visit the UserLand store [1] to renew your license for $39.95, so that you can continue to receive software updates and store content on UserLand's community server.

You have 2 days remaining in your Radio UserLand license for the XXXX-XXXX-XXXX serial number.

If you have any questions or concerns, please review the Radio UserLand website [2], or post questions on the mail list [3], or discussion group [4]; or simply respond to this email.

And thanks from all of us at UserLand for using Radio UserLand. We sincerely hope you like it and use it well.


Categories: Blog

July 8, 2004
@ 12:48 PM

Do I do this because I want to or do I do this because I need to?

Categories: Architecture

In my comment view for the last post (comment #1), Piyush Pant writes about the confusion around different pipeline models and frameworks that are popping up all over the place and mentions Proseware, so I need to clarify some things:

I'll address the "too many frameworks" concern first: Proseware's explicit design goal and my job was to use the technologies ASP.NET Web Services, WSE 2.0, IIS, MSMQ, and Enterprise Services as purely as possible, and I intentionally did not introduce yet another framework for the runtime bits beyond a few utility classes used by the services as a common infrastructure (like a config-driven web service proxy factory, the queue listener, or the just-in-time activation proxy pooling). What my job was and what I reasonably succeeded at was to show that:

Writing Service Oriented Applications on today's Windows Server 2003 platform does not require yet another framework.

The framework'ish pieces that I had to add simply address some deployment issues like creating accounts, setting ACLs, or setting up databases that need to be done in a "real" app that isn't a toy. Such things are sometimes difficult to abstract on the level of what the .NET Framework can offer as a general-purpose platform, or are simply not there yet. All of these extra classes reside in an isolated assembly that's only used by the installers.

The total number of utility classes that play a role of any importance at runtime is 5 (in words five) and none of them has more than three screen pages worth of actual code. Let me repeat:

Writing Service Oriented Applications on today's Windows Server 2003 platform does not require yet another framework.

I do have a dormant (newtelligence-owned) code branch sitting here that'd make a lot of things in Proseware easier and more elegant to develop and makes reconfiguring services more convenient, but it's a developer convenience and productivity framework. No pipelines, no other architecture, just a prettier shell around the exact Proseware architecture and technologies I chose.

To illustrate my point about the fact that we don't need another entirely new framework, I have here (MessageQueueWebRequest.cs.txt, MessageQueueWebResponse.cs.txt) an early 0.1 prototype copy of our MessageQueueWebRequest/-WebResponse class pair that supports sending WS messages through MSMQ. (That prototype only does very simple one-way messages; you can do a lot more with MSMQ).  

Take the code, put it in yours, create a private queue, take an arbitrary ASMX WebService proxy, call MessageQueueWebRequest.RegisterMSMQProtocol() when your app starts, instantiate the proxy, set the Url property of the proxy to msmq://mymachine/private$/myqueue, invoke the proxy and watch how a SOAP message materializes in the queue.

Next step: use a WSE proxy. Works too. I'll leave the receiver logic to your imagination, but that's not really much more than listening to the queue and throwing the message into a WSE 2.0 SoapMethod or throwing it as a raw HTTP request at an ASMX WebMethod or by using a SimpleWorkerRequest on a self-hosted ASP.NET AppDomain (just like WebMatrix's Cassini hosts that stuff).
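The real prototype files are linked above; in case they are gone, here is a from-memory sketch of how such a pair plugs into System.Net. WebRequest.RegisterPrefix and IWebRequestCreate are the documented extension points; everything inside the classes is my guess at the shape, not the actual prototype code.

```csharp
using System;
using System.IO;
using System.Net;
using System.Messaging;

// Factory that System.Net calls for any "msmq://..." URI once registered.
class MessageQueueWebRequestCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri)
    {
        return new MessageQueueWebRequest(uri);
    }
}

class MessageQueueWebRequest : WebRequest
{
    readonly Uri uri;
    readonly MemoryStream buffer = new MemoryStream();

    public MessageQueueWebRequest(Uri uri) { this.uri = uri; }

    public static void RegisterMSMQProtocol()
    {
        WebRequest.RegisterPrefix("msmq", new MessageQueueWebRequestCreator());
    }

    // The proxy serializes the SOAP envelope into this stream.
    public override Stream GetRequestStream() { return buffer; }

    public override WebResponse GetResponse()
    {
        // msmq://machine/private$/queue -> machine\private$\queue
        string path = uri.Host + "\\" +
            uri.AbsolutePath.TrimStart('/').Replace('/', '\\');
        using (MessageQueue queue = new MessageQueue(path))
        {
            Message msg = new Message();
            msg.BodyStream = new MemoryStream(buffer.ToArray());
            queue.Send(msg);
        }
        return null; // one-way only in this sketch; no reply to hand back
    }
}
```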


On to "pipelines" in the same context: Pipelines are a very common design pattern and you can find hundreds of variations of them in many projects (likely dozens from MS) which all have some sort of a notion of a pipeline. It's just "pipeline", not Pipeline(tm) 2003 SP1.

User-extensible pipeline models are a nice idea, but I don't think they are very useful to have or consider for most services of the type that Proseware has (and that covers a lot of types).

Frankly, most things that are done with pipelines in generalized architectures that wrap around endpoints (in/out crosscutting pipelines) and that are not about "logging" (which is, IMHO, more useful if done explicitly and in-context) are already in the existing technology stack (Enterprise Services, WSE) or are really jobs for other services.

There is no need to invent another pipeline to process custom headers in ASMX, if you have SoapExtensions. There is no need to invent a new pipeline model to do WS-Security, if you can plug the WSE 2.0 pipeline into the ASMX SoapExtension pipeline already. There is no need to invent a new pipeline model to push a new transaction context on the stack, if you can hook the COM+ context pipeline into your call chain by using ES. There is no need to invent another pipeline for authorization, if you can hook arbitrary custom stuff into the ASP.NET Http Pipeline or the WSE 2.0 pipeline already has or simply use what the ES context pipeline gives you.
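For the ASMX case, the hook is a SoapExtension. A minimal skeleton looks roughly like this (the class name and what you'd do at each stage are placeholders):

```csharp
using System;
using System.Web.Services.Protocols;

public class CustomHeaderExtension : SoapExtension
{
    public override object GetInitializer(Type serviceType) { return null; }

    public override object GetInitializer(LogicalMethodInfo methodInfo,
                                          SoapExtensionAttribute attribute)
    { return null; }

    public override void Initialize(object initializer) { }

    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.BeforeDeserialize:
                // look at / strip custom headers on the inbound envelope
                break;
            case SoapMessageStage.AfterSerialize:
                // stamp custom headers onto the outbound envelope
                break;
        }
    }
}
```

Such an extension composes into the stack via the soapExtensionTypes element in web.config, which is exactly the point: the hook is already there.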

I just enumerated four (!) different pipeline models, and all of them are in the bits you already have on a shipping platform today, and, as it happens, all of them compose really well with each other. The fact that I am writing this might show that most of us just use and configure our services without even thinking of them as a composite pipeline model.

"We don't need another Pipeline" (I want Tina Turner to sing that for me).

Of course there are other pipeline jobs, right? Mapping!

Well, mapping between schemas is something that goes against the notion of a well-defined contract of a service. Either you have a well-defined contract or two or three or you don't. If you have a well-defined contract and there's a sender that doesn't adhere to it, it's the job of another service to provide that sort of data negotiation, because that's a business-logic task in and by itself.

Umm ... ah! Validation!

That might be true if schema validation is enough, but validation of data is a business logic level task if things get more complex (like if you need to check a PO against your catalog and need to check whether that customer is actually entitled to get a certain discount bracket). That's not a cross-cutting concern. That's a core job of the app.

Pipelines are for plumbers


Now, before I confuse everyone (and because Piyush mentioned it explicitly):

FABRIQ is a wholly different ballgame, because it is precisely a specialized architecture for dynamically distributable, queued (pull-model), one-way pipeline message processing and that does require a bit of a framework, because the platform doesn't readily support it.

We don't really have a notion of an endpoint in FABRIQ that is the default terminal for any message arriving at a node. We just let stuff asynchronously flow in one direction and across machines and handlers can choose to look at, modify, absorb or yield resultant messages into the pipeline as a result of what they do. In that model, the pipeline is the application. Very different story, very different sets of requirements, very different optimization potential and not really about services in the first place (although we stick to the tenets), but rather about distributing work dynamically and about doing so as fast as we can make it go.

Sorry, Piyush! All of that totally wasn't going against your valued comments, but you threw a lit match into a very dry haystack.


Categories: Architecture | SOA

Benjamin Mitchell wrote a better summary of my "Building Proseware Inc." session at TechEd Amsterdam than I ever could.

Because ... whenever the lights go on and the mike is open, I somehow automatically switch into an adrenalin-powered auto-pilot mode that luckily works really well and since my sessions take up so much energy and "focus on the moment", I often just don't remember all the things I said once the session is over and I am cooled down. That also explains why I almost never rehearse sessions (meaning: I never ever speak to the slides until I face an audience) except when I have to coordinate with other speakers. Yet, even though most of my sessions are really ad-hoc performances, whenever I repeat a session I usually remember whatever I said last time just at the very moment when the respective topic comes up, so there's an element of routine. It is really strange how that works. That's also why I am really a bad advisor on how to do sessions the right way, because that is a very risky approach. I just write slides that provide me with a list of topics and "illustration helpers" and whatever I say just "happens". 

About Proseware: All the written comments that people submitted after the session have been collected and are being read, and it's very well understood that you want to get your hands on the bits as soon as possible. One of my big takeaways from the project is that if you're Microsoft, releasing stuff that is about giving "how-to" guidance is (for more reasons than you can imagine) quite a bit more complicated than just putting bits up on a download site. It's being worked on. In the meantime, I'll blog a bit about the patterns I used whenever I can allocate a timeslice.

Categories: Architecture | SOA | TechEd Europe

Simple question: Please show me a case where inheritance and/or full data encapsulation makes sense for business/domain objects on the implementation level. 

I'll steal the low-hanging fruit: Address. Address is a great candidate when you look at an OOA model as you could model yourself to death having BaseAddress(BA) and BA<-StreetAddress(SA) and BA<-PostalAddress(PA) and SA<-US_StreetAddress and SA<-DE_StreetAddress and SA<-UK_StreetAddress and so forth.

When it comes to implementation, you'll end up refactoring the hierarchy into one thing: Address. There's probably an AddressType attribute and a Country field that indicates the formatting. Since implementing a full address validation component is way too much work, that feature gets cut anyway, and hence we end up with a multiline text field holding the properly formatted address. Stuff like Street and PostOfficeBox (eventually normalized to AddressField), City, PostalCode, Country and Region is kept separate really just to make searching easier and faster. The stuff that goes onto the letter envelope is really only the preformatted address text.

Maybe I am too much of a data (read: XML, Messages, SQL) guy by now, but I just lost faith that objects are any good on the "business logic" abstraction level. The whole inheritance story is usually refactored away for very pragmatic reasons, and the encapsulation story isn't all that useful either. You simply can't pragmatically regard data validation at the property get/set level as a useful general design pattern, because a type like Address is one type with interdependencies between its elements, not simply a container of independent values. The rules for Region depend on Country, and the rules for AddressField (or Street/PostOfficeBox) depend on AddressType. Since the object can't know what data you intend to supply to it at the property get/set level, it can't do meaningful validation on that level. Hence, you end up calling something like address.Validate(), and from there it's really a small step to separate out code and data into a message and a service that deals with it and call Validate(address). And that sort of service is the best way to support polymorphism over a scoped set of "classes", because it can potentially support "any" address schema and can yet concentrate and share all the validation logic (which is largely the same across whatever format you might choose) in a single place instead of spreading it across an array of specialized classes that's much, much harder to maintain.
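To make the argument concrete, here is a minimal sketch of the "Validate(address)" service idea. The field names and rules are illustrative assumptions, not a real schema: the address is plain data, and one validation function applies the interdependent rules (Region depends on Country, AddressField depends on AddressType) in a single place.

```python
# Hypothetical sketch: validation as a service over a plain data message,
# instead of per-property setter checks. All rules and field names below
# are made up for illustration.

POSTAL_CODE_RULES = {                 # assumed, simplified rules
    "DE": lambda pc: len(pc) == 5 and pc.isdigit(),
    "US": lambda pc: len(pc) == 5 and pc.isdigit(),
    "UK": lambda pc: 5 <= len(pc) <= 8,
}
COUNTRIES_REQUIRING_REGION = {"US"}   # e.g. a US state is mandatory

def validate(address: dict) -> list:
    """Validate the whole message at once; return a list of problems."""
    errors = []
    country = address.get("Country")
    rule = POSTAL_CODE_RULES.get(country)
    if rule and not rule(address.get("PostalCode", "")):
        errors.append("PostalCode invalid for " + country)
    # Region rules depend on Country:
    if country in COUNTRIES_REQUIRING_REGION and not address.get("Region"):
        errors.append("Region is required for this Country")
    # AddressField rules depend on AddressType:
    if address.get("AddressType") == "PostOfficeBox" \
            and not address.get("AddressField"):
        errors.append("AddressField (P.O. box number) is required")
    return errors

errors = validate({"Country": "US", "PostalCode": "9021", "City": "LA"})
# Two problems: bad postal code, and a missing Region for a US address.
```

Because the function sees the whole message, cross-field rules are trivial to express here, and the same service could accept any address schema it knows how to interpret.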

What you end up with are elements and attributes (infoset) for the data that flows across, services that deal with that flowing data, and rows and columns that efficiently store data and let you retrieve it flexibly and quickly. Objects have lost their place in that picture for me (except on the abstract and conceptual analysis level, where they are useful for understanding a problem space).

While objects are fantastic for frameworks, I've absolutely unlearned why I would ever want them on the business logic level in practice. Reeducate me.

Categories: Architecture | SOA

I keep blogging about great people who do amazing things. Here’s someone else who did an amazing thing: me ;-)  Meet the grand master and anchor of our branch of the Vasters clan: Richard Vasters.

My dad is a Bezirksschornsteinfegermeister (“district master craftsman chimney sweep”, very different job description from what it is in the U.S.) and a computer veteran without wanting to be. He’s been using PCs since 1987 and he’s constantly refusing to learn more than absolutely necessary and I keep being “support central”.

Because he’s so stubbornly refusing to deal with computer complexity, I don’t even get upset anymore about saying “space” and “enter” whenever I have him navigate the command line over the phone (which luckily became very rare) or about repeating every detail of the sequences that get him through various levels of dialog windows. Also, even though my dad is a language genius (he speaks no language other than German properly, but he mysteriously manages to express himself to people from any corner of the planet just fine), he’s not even remotely willing to deal with “File” and “Edit” menus and insists on having the German “Datei” and “Bearbeiten” menus instead. I mostly don’t know what most apps look like in German, since I am running the U.S. English versions all the time. Yet, to help him I need to know cold what’s going on on his screen when he’s pressing this key and clicking that thing as I navigate him through stuff on the phone – otherwise I’d just go insane.

On the way home from TechEd  Amsterdam, I finally found time to upgrade his desktop from Windows 98 to XP (the XP carton was sitting idle on his desk for months), set up a wireless network (all the WLAN router was doing was to consume power) with internet connection sharing, firewalled everything, patched the desktop and the notebook up to the latest fixes, installed virus scanners (and sure enough found one), hooked up the printer to the WLAN hub and got him on MSN Messenger – and to my complete surprise (and shock because of the inherent consequences!) Remote Assistance actually works from my place to the “family mansion”. Now he’s thrilled that he can issue print jobs to the printer in the office from the terrace without any wires.

Dad uses his PCs (the desktop and the notebook) for exactly one thing: getting stuff done. He has little to no interest in how things work, and if stuff is too complicated he just ignores it and gives up immediately, because it’s just too annoying. For me, he’s always providing an absolutely amazing “reality check”. Sure enough, a German news channel was running a special program on Linux while I was there, and there came the inevitable question: “Is that something for me?” Now there’s a support nightmare I am absolutely looking to avoid, so I answered with a resounding “No!”

You think I am just a stupid dork and too lazy? Here’s the challenge to a daring (sorry, !insane!) individual: Install Linux on the desktop box (the notebook will remain XP), offer a year of free German-language phone support with 24-hour turnaround time and on-site support with 72-hour turnaround time, and get it all to run and maintain it (including patches) so that my dad can comfortably run all the apps he needs to get his work done (sorry, all Windows apps with no Linux alternatives; so WINE would have to do it, and the respective ISVs – all relatively small firms with vertical solutions – will deny any support on any non-Windows platform). If you succeed and make it through the year without having to check into a psychiatric hospital, I’ll happily admit that Linux is ready for the desktop. If you fail, you owe us a year’s supply of beer. I think that’s a fair deal, because it’s oh-so ready, you can’t fail at that task, right?

Categories: Other Stuff

July 5, 2004
@ 07:43 PM

Arvindra Sehmi, Architect Lead at Microsoft EMEA, father and mother of the Microsoft Architects JOURNAL, the inspiration and project lead for the FABRIQ, the man who's dragged me twice through Europe on the EMEA Architect Tour (2003, 2004 video archives) and the owner of the Architect Track at TechEd Europe is now finally blogging.

Categories: Blog | Other Stuff

We've built FABRIQ, we've built Proseware. We have written seminar series about Web Services Best Practices and Service Orientation for Microsoft Europe. I speak about services and aspects of services at conferences around the world. And at all events where I talk about Services, I keep hearing the same question: "Enough of the theory, how do I do it?"

Therefore we have announced a seminar/workshop around designing and building service-oriented systems that pulls together everything we've found out in the past years about how services can be built today on today's Microsoft technology stack, and how your systems can be designed with migration to the next-generation Microsoft technology stack in mind. Together with our newtelligence Associates, we are offering this workshop for in-house delivery at client sites world-wide and are planning to announce dates and locations for central, "open for all" events soon.

If you are interested in inviting us for an event at your site, contact Bart DePetrillo. If you are interested in participating in a central seminar, Bart would also like to hear about it (no obligations) so that we can select reasonable location(s) and date(s) that fit your needs.

Categories: Architecture | SOA | FABRIQ | Indigo | Web Services

July 4, 2004
@ 08:42 PM
Categories: Other Stuff

July 4, 2004
@ 05:50 PM

Monday I'll start earnestly working on this year's "summer project". Last year's project yielded what you today know as dasBlog. This year's prototyping project will have to do with running aggregated RSS content through FABRIQ networks for analysis and enrichment, solidifying the newtelligence SOA framework (something you don't even know about yet and it's not Proseware) and architecting/building a fairly large-scale system for dynamically managing user-centric media for targeted and secure distribution to arbitrary presentation surfaces. (Yes, I know that's a foggy explanation). Will the result be free stuff? No. Not this time. Will you hear about what we learn on the road to Bumblebee? Absolutely.

Categories: Architecture | Other Stuff

July 4, 2004
@ 01:04 PM

Wow. Done. TechEd Amsterdam was great fun as it is every year (the partying was a bit less excessive than in Barcelona last year, but Forte wasn't here, so that explains a lot), and thanks to a great audience who liked my content, I got some really awesome scores this year. I have a couple of pictures and stories to blog, but I will give myself a day to settle in at home. I am really glad that we're now entering the rather quiet summer time. No travel for the next weeks. Goodness.

Ah, if you happen to have pictures of the "The Nerd, The Suit, and The Fortune Teller" session, it would be great if you could share a copy with me or send me a link if you put them anywhere on the web. Haven't seen any so far. That session was a lot of fun for Pat, Rafal and myself.

Categories: TechEd Europe