June 30, 2003
@ 01:14 PM

(Some background reading for my DEV359 session in Barcelona on Friday July 4, 16:00, Room 7)


Summary of a Year with Aspects


A bit less than a year ago, I got a few little hints.


Early last year I had been playing around with the managed portion of the .NET Framework’s ServicedComponent infrastructure and wanted to smuggle code between the client and server side for purposes of validating parameters, monitoring and other things. I learned a lot about the interaction between Remoting and the Enterprise Services infrastructure, but found that there was no way to get interception working using managed code. So I talked to some friends at Microsoft about this and after quite a bit of begging they pointed me to the relevant public patents on the COM+ extensibility points, which are documented there – in legal speak – and nowhere else. I also got a hint or two on what GUIDs to look up in the registry and a few other tips, which all wasn’t much but enough to get things rolling. Armed with plenty of assembly-level debugging experience from the time when I wrote large COM frameworks in unmanaged code, I went digging. Deep.


Now, a year later, I have two activators and one policy almost working (more on that in a bit) and what I have done is – as all the Enterprise Services people at Microsoft tell me – possibly the only inside-COM+ extension ever built by anyone outside of Microsoft. And because most of the people in that product group are busy building the next-generation base infrastructure for Enterprise Services, I even seem to be the only one who has written new code in that area for at least two years.


Still, I am about to give up.


The reason for that is technical but not really a problem of Enterprise Services or COM or the .NET Framework. It’s the fact that I am trying to use a beautifully designed extensibility point in exactly the way it was envisioned, but nobody ever assumed it would be used by anyone outside the product group.


Let’s call that a problem of “opaque aspects”. However, before I can explain the problem, I need to explain a little more how the Enterprise Services (COM+) infrastructure works internally. I am simplifying a bit here, but it’s enough to get the picture.


Whenever a COM object is created from any programming environment, it ultimately happens through CoCreateInstanceEx. Once inside CoCreateInstanceEx, the component’s configuration, including the server identity (DLL, process or remote machine), threading models and all of the other essentials, is looked up. The configuration is actually a chain of providers: the first stage is an in-memory cache, the second stage reads the COM+ catalog (which is a very efficient, COM+ specific ISAM database) and the third stage goes to the registry. If the component to be instantiated is found in the COM+ catalog, it is called “configured” and instances are constructed using the COM+ infrastructure.

Object construction happens through a chain of so-called “activators”. An activator is a COM object that gets associated with a component through one (or multiple) entries in the catalog. Each component can have any number of activators in each “stage”. The stages indicate on which level the activation process is currently working: client context, client machine, server machine, server process and server context. In each stage, an activator can perform work that needs to be done before the next stage can be entered. If you want to add a “policy” to a newly created context, you will do so at the “server process” stage, because the context setup needs to be complete before the activation process can enter the “server context” stage.

When you install a policy you are usually adding two things to the context: the first is usually called a “property” and is an object that can be accessed (by those who have the right header files) through the object context at the application level; the second is an interceptor that can subscribe to get notified whenever a call passes in and out of the activated object’s context. Both the interceptor and the property can be implemented on the same class, and that’s what I usually do.
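To make the mechanics a bit more concrete, here is a rough sketch of the activation-chain idea, with Python standing in as neutral pseudocode. This is not the COM+ API; all class and property names are made up for illustration:

```python
# Illustrative sketch of the COM+ activation chain concept.
# The real infrastructure is unmanaged COM with its own interfaces;
# everything named here is invented for the sake of the example.

STAGES = ["client context", "client machine", "server machine",
          "server process", "server context"]

class Activator:
    """An activator hooks activation stages for a component."""
    def activate(self, stage, context):
        raise NotImplementedError

class PolicyInstaller(Activator):
    """Does its work at the 'server process' stage: installs a
    'property' (context-level state) and an interceptor into the
    context before the 'server context' stage is entered."""
    def activate(self, stage, context):
        if stage == "server process":
            context["properties"]["my-policy"] = {"done": False}
            context["interceptors"].append("my-policy")

def create_instance(activators):
    # Every activator registered for the component runs in each stage
    # before the activation process may enter the next stage.
    context = {"properties": {}, "interceptors": []}
    for stage in STAGES:
        for activator in activators:
            activator.activate(stage, context)
    return context

ctx = create_instance([PolicyInstaller()])
```

The point of the sketch is only the ordering guarantee: by the time the “server context” stage is reached, the context is fully furnished with properties and interceptors.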


So, in short, the role of the activator is to deal with object creation, the property maintains related state and the policy acts on calls entering and leaving the context. If you configure a class to support just-in-time activation (JITA), the policy will inspect the “done” bit in the property on the call “leave” event and deactivate and disconnect the object if it’s set. When the next call comes in (“enter”), a new object is created and connected. If you configure transactions, a transaction is created by the policy on “enter” and terminated on “leave” if the “done” bit is set or whenever the context is closed. All of COM+ is based on these three elements.
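The JITA behavior can be sketched like this (again an illustrative Python sketch, not the actual infrastructure; the names are invented):

```python
# Sketch of the JITA policy idea: on "leave", inspect the "done" bit
# held in the context property and deactivate the object if it is set;
# on the next "enter", a fresh object is created and connected.

class JitaPolicy:
    def __init__(self, factory):
        self.factory = factory    # creates fresh object instances
        self.instance = None
        self.done = False         # the "done" bit in the property

    def enter(self):
        # Just-in-time activation: create a new object on demand.
        if self.instance is None:
            self.instance = self.factory()
            self.done = False
        return self.instance

    def leave(self):
        # Inspect the "done" bit: deactivate and disconnect if set.
        if self.done:
            self.instance = None

policy = JitaPolicy(factory=object)
first = policy.enter()
policy.done = True        # the component signals it is done
policy.leave()            # object gets deactivated and disconnected
second = policy.enter()   # a brand-new object is created and connected
```

The client keeps holding what it thinks is the same object reference the whole time; the swap happens behind the context boundary.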


I wrote two activators/policies. The first activator redirects activations into a secondary, configurable application domain in order to fix the problem that all managed Enterprise Services components end up being created in the default domain. The problem with the default domain for out-of-process components is that they all live on top of dllhost.exe and therefore the XML configuration file for all Enterprise Services apps is dllhost.exe.config in the Windows system directory. That’s annoying and therefore I decided to fix that.


The second activator and the policy exist to enable custom extensibility. The goal is to intercept all calls with all inbound and outbound parameters and pass this information on to custom, managed extensibility points that are attached to the class metadata using attributes. So, in essence, that’s an attribute-driven way to implement aspects. What the “AOP people” use pointcuts for in AspectJ is done here using attributes. It’s just a different way to express the necessary metadata to get the interception (“weaving”) going.


So, if you write an attribute (aspect) class called “GreaterThanAttribute” that implements a specific interface and put that on a method parameter like void MyFunc( [GreaterThan(1)] int param ), the aspect is going to be called every time just before the function actually gets invoked on the server. If the validation rule is violated, the aspect can throw an exception and prevent the call from proceeding. That was the idea and I got it to work – almost.
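Translated into Python terms, the same idea, metadata on a parameter driving a validation interceptor, looks roughly like this. This is a sketch of the concept only; the decorator name and rule format are my invention, not the .NET code:

```python
# Sketch of attribute-driven parameter validation: metadata attached
# to a parameter is evaluated by an interceptor just before the call
# reaches the target function. All names are invented for illustration.

import functools
import inspect

def greater_than(**rules):
    """Analog of a [GreaterThan(n)] parameter attribute: maps a
    parameter name to its required minimum value."""
    def decorate(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def interceptor(*args, **kwargs):
            # The aspect runs just before the function is invoked and
            # can deny the call by throwing an exception.
            bound = sig.bind(*args, **kwargs)
            for name, minimum in rules.items():
                if bound.arguments[name] <= minimum:
                    raise ValueError(f"{name} must be > {minimum}")
            return func(*args, **kwargs)
        return interceptor
    return decorate

@greater_than(param=1)
def my_func(param):
    return param * 2
```

Calling my_func(5) goes through; calling my_func(0) is denied by the aspect before the function body ever runs.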


The “almost” is the sad part of the story. The code has been at 98% complete for the past four to five months and that’s where I am stuck.

There are multiple problems and most of them are related to the way the managed Enterprise Services layer cheats on the unmanaged COM+ infrastructure in order to keep managed calls managed and avoid COM Interop. When you make an in-process call from managed code to managed code, there is no COM call. In fact, all that COM+ learns about the call is that it happens. It doesn’t learn about the exact object that the call is happening on, it doesn’t know about any of the parameters; it just doesn’t know. When you make an out-of-process call from managed code to managed code, there is also no proper COM call in most cases. While the call will come in via the COM transport, the actual call data is contained within a binary package passed to one of the methods on the IRemoteDispatch COM interface. The managed implementation of that unmarshals the package, finds a Remoting IMessage object and dispatches it on the managed server object. These call paths exist in parallel with the support for in-process and out-of-process calls from unmanaged clients.


None of the existing COM+ policies ever looks at parameters, but because I wanted to allow parameter inspection I actually had to get at the parameters. Here’s where it gets hairy. For inproc, managed-to-managed calls, there is no COM call and therefore all the information about the call turns out to be NULL in the calls on the policy. No information. How shall I get at the parameters if all I have is nothing? I remember staring at the call stack in the debugger with little hope to get anywhere (that was several weeks and a couple of thousand lines into the project) and seeing everything being NULL, while all information I wanted to have was twelve stack frames above the current position on the stack.


The solution for that problem starts ugly: __asm mov __EBP, ebp. I ended up writing a custom stack walk (and having to compensate for an odd __cdecl frame) that figures out the right frame by a certain signature and steals the necessary parameters from “up there”. That worked. The outproc, managed-to-managed case was fairly easy, because I could simply unmarshal the IMessage myself using the BinaryFormatter. What turned out to be way more complicated than expected was the “traditional” case of unmanaged calls. First, I need to decode IDispatch::Invoke calls and correlate them with the target object “by hand”. That’s hard. Second, I need to chain in a universal interceptor that proxies each and every interface on the actual backend object in order to see calls that come in as regular COM calls. In essence this means that the activator will have the default activator create the backend object first, wrap the reference in the interceptor class and return the interceptor. Here’s where it gets ugly.


The “tracker” property/policy that gives you all the cute spinning balls and the somewhat useful statistics in the Component Services explorer doesn’t like the interceptor. While what I am doing is perfectly legal COM, the tracker just doesn’t expect that sort of thing to happen and gets confused. Just-in-time activation and object pooling have similar issues with the interceptor and are either hard to convince to deal with it (JITA) or simply crash (pooling). The more services and combinations of services you look at, the more colorful the effects become. COM+ is a well-tuned, perfectly integrated set of aspect-like services. The issue is that they don’t expect strangers to show up in the house. Once you introduce any significant changes into the behavior of the infrastructure, the problems you need to deal with get totally out of hand.


The underlying problem is that with aspects, in general, you get the same problems as with objects vs. components. Chaining an aspect into an activation or call chain is very much like overriding a virtual method of a class whose behavior you don’t fully understand. Because the combination and resulting order of aspects creates unknown preconditions for your own code, you have to understand the interaction of any configuration’s resulting set of aspects in order to get everything right. And just as with overriding virtual functions, that means you either need the full source code to look at, change and recompile, or very precise documentation, to get things working at all. The real problem is that the problems never end. You develop your aspects assuming a set of pre-existing other aspects that you need to be friendly to, and someone else does the same. You combine the two resulting aspects on a single class and everything breaks, because the other person’s aspect doesn’t know to be friendly to yours.


There are very few use-cases where aspects can ever be truly independent of other aspects. “Passive” aspects like logging and monitoring, and “gatekeepers” like argument validation and custom authorization, are among those few.


However, even those may have important dependencies. If you log call data into a database for statistics, billing or other purposes, what do you do if the call is transactional and fails? Do you want to roll back the call data, too? If so, you need to be behind the transaction aspect; if not, you need to be in front of it. If you validate arguments and throw an exception before the call is ever executed, does that get logged and how? If you introduce custom authorization, that should definitely happen before transactions are created. This list could go on forever.
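The logging-versus-transaction ordering question can be made concrete with a tiny sketch (Python used as pseudocode; all names are invented and the “transaction” is a toy):

```python
# Sketch of the ordering dependency: whether the logging aspect sits
# inside or outside the transaction aspect decides whether its log
# entries are rolled back when the call fails.

def transactional(func, log=None):
    """Rolls back work done inside it on failure; if a log list is
    handed in, entries written during the call are rolled back too."""
    def wrapper(*args):
        checkpoint = len(log) if log is not None else 0
        try:
            return func(*args)
        except Exception:
            if log is not None:
                del log[checkpoint:]   # the log rolls back with the tx
            raise
    return wrapper

def logged(func, log):
    """A logging aspect that records every call it sees."""
    def wrapper(*args):
        log.append(("call", args))
        return func(*args)
    return wrapper

def failing_call(x):
    raise RuntimeError("business logic failed")

# Logging behind (inside) the transaction: entries vanish on abort.
log_inside = []
inside = transactional(logged(failing_call, log_inside), log=log_inside)

# Logging in front of (outside) the transaction: entries survive the
# abort, which is what you want if the log feeds billing.
log_outside = []
outside = logged(transactional(failing_call), log_outside)

for call in (inside, outside):
    try:
        call(42)
    except RuntimeError:
        pass
```

Same two aspects, same business call; only the nesting order differs, and the observable outcome differs with it.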


Don’t get me wrong, I still see value in the interception approach for this small set of use-cases if you know what you are doing. You can save a lot of code by declaring the need for services in the Enterprise Services way instead of using them imperatively. However, for multiple development organizations to “cooperate anonymously” the model of putting aspects into a simple processing pipeline that acts on messages as they pass in and out of a context or to and from a method is severely broken and insufficient. In order to make that model work, we need something like COM. No, not the technology itself, but we need something that does for aspects what COM did for objects: allowing multiple parties to build composable parts that can be queried for their requirements and capabilities and that implement well-known protocols for effective coordination. I still think that’s entirely possible to do and – as I have mentioned earlier – I have some ideas for such a framework model, including using two-phase-commit-style processing, but that’s not going to fix the problems one faces in existing environments.


I learned a lot doing all this work, so it was definitely not a waste of time. I will move the Enterprise Services aspect framework portions out of the core utility assemblies and into a set of special assemblies and declare it “for experimental use only” for now. You only ever really learn when you fail ;)


June 29, 2003
@ 11:04 AM


It's TechEd Europe time. I got into Barcelona yesterday, arriving from Tunis where I spoke at the first North Africa Developer Conference. It was a great event at a fantastic location. I mean ... how much better can it get than a beach resort where the session rooms are 500m away from the beach? Malek Kemmou, the Microsoft Regional Director from Morocco and the speaker rock-star in North Africa, was the most excellent host one could imagine.

Now I am sitting in the lobby of the speakers' hotel (wireless enabled), checking email and prepping demos. I am looking forward to meeting lots of friends from all around Europe and to quite a few parties, and I am sure my talks are going to be fun, too. And what's even better: this event marks the end of a stretch of about four months (excluding that one week of vacation) during which I have been almost constantly on the road. After Barcelona I will actually get some time at the office, start to figure out new stuff and can complete and consolidate a bunch of things that I started during this time but could never quite get done. And that also means that this blog will be much less boring than it was in the last couple of weeks ;)


Conceptual talks vs. coding talks

I am in Frankfurt now, coming from Dubai, going to Tunis. Yesterday I talked about the relevance of contracts and agreement in a Web Services environment and about scalability patterns to use with Enterprise Services at the Microsoft Research "Crash Course" that was organized for professors from all across the Middle East, Eastern Mediterranean and African regions. This wasn't my first event in the academic space, but certainly the largest so far. And it's a very different audience to address, indeed. In two 55-minute talks I spent less than 5 minutes each highlighting and explaining a couple of product features; the rest of the time was all about underlying concepts and strategy.

I am starting to think that these types of talks just make more sense for conference attendees than "coding sessions". Everyone can pick up a "how-to" book, read the reference material or poke around in samples at home. For me, an ideal conference inspires, highlights things that are off the beaten path and provides insights into the "why" more than into the "how". Having said that, it's pretty frustrating when you have given a purely conceptual talk that went really well, you get great feedback from the interested people you talk to afterwards, and then you get a comment like "very bad, there was no demo" or simply "more demos" in the written feedback. I should probably start blogging attendee comments and write my comments on comments. I guess I'll do that for TechEd Europe. Beware ;)


June 22, 2003
@ 05:29 AM

Hitting the road again...

I didn't take the notebook and didn't even cheat last week (no Internet Cafes, no hidden PocketPC, etc).

I went to a small sea resort at the Baltic Sea in one of the "new" states in the eastern part of Germany. The average age of the tourists there seemed to be about 65 and hence there was no danger of wild partying, which was indeed a Good Thing. I stayed there for three days and then went to see the sights of some cities "up north" on a lazy two day trip back home (it'd usually take me about 8 hrs of driving). I went to Wismar and Schwerin, places around Lübeck and also did the essential touristy thing in Hamburg by taking a boat tour through the seaport (which is the 2nd largest in Europe). I haven't been too much to our eastern states in recent years and it's great to see how the visible differences between "the East" and "the West" have vanished for the most part and that what's left are just normal regional differences. All in all it was a week that most people would probably call "horribly boring" :)

Today's travel destination is Dubai -- now again with a notebook and using the time on plane to prep for tomorrow's event ;)


June 16, 2003
@ 07:20 AM

Out of here....

I am taking a week off and will drive up north towards the sea. Computer stays behind. I need a break. Will be back on Saturday and then pack up to go to Dubai and Tunis (for the North Africa Developers Conference) and then to Barcelona for TechEd Europe, which is surely going to be the event highlight of the year. I give five talks in Barcelona, including a chalk talk together with my very newtelligent colleagues:

Chalk Talk: Gotchas from porting DNA to .NET (CHT012)
Speakers: Clemens Vasters, Achim Oellers, Joerg Freiberger 
+ 2 July 2003, 08:30h

Microsoft® .NET Web Services Internals : I Didn't Know You Could Do That!  (WEB404) 
+ 2 July 2003, 18:15h

Loose Coupling and Serialization Patterns : The Holy Grail of Service Design  (WEB400)
+ 3 July 2003, 10:00h

Layers and Tiers (DEV387)
+ 3 July 2003, 18:15h

Aspect-Oriented Programming (DEV359)
+ 4 July 2003, 16:00h


June 15, 2003
@ 01:56 PM


It's neither finished nor perfect, but it's a lot of code and I want to get something out before I leave for my short vacation. A current "daily build" of the newtelligence SDK. May or may not work for you; it does for me. It includes, mostly in source code form, all the base classes I used for the TechEd demos. Please be advised that everything related to newtelligence.EnterpriseServices.AspectServicedComponent currently breaks with object pooling and leaks lotsa memory in any out-of-process case. Those issues don't affect any of the other classes. Readme, MSI (3.6MB)


June 14, 2003
@ 07:41 PM

It got increasingly difficult to distribute my sample code, because the dependencies on the set of standard libraries that I've built over the past months just keep growing. I fell into the same trap for the "polished" version of the samples for TechEd, so I decided to make a cut and finally turn this stuff into a whole "SDK" (I think I've written about that a while ago) and simply make it a prerequisite for installing additional demos. That takes away a lot of the repetitive work of building installers over and over again. The MSI file is almost finished and I think I should be able to make a first drop available by tomorrow.

From Monday until the end of the week I will take a desperately needed week of vacation. I'll just get into the car and drive up north to the sea. No computer in the luggage. I won't even try to read email. 



June 12, 2003
@ 04:33 PM
Sorry, Testing. Welcome DIPLOMAT, bye bye AMBASSADOR. I just inaugurated a brand new 60GB harddrive for my notebook by installing a new OS and I am finally, finally on Win03 for good. Nothing beats lots of RAM and a fast disk. That notebook is screeeeaaming now.

Ingo has some new guidance around Remoting. While I don't agree with him on giving the WAN case any "perfectly okay" marks in his feature/scenario matrix, most of the rules make sense to me for a LAN scenario that doesn't need to scale heavily. Still, the "put state into the database" rule is a bit strict and probably needs a little more thought; after all, the database is just another shared service. Also, in a LAN where security doesn't play much of a role, you probably also don't need to scale unexpectedly as you would on the web, and therefore the whole "host in IIS" and "single call only" business seems a bit strict, too.

And if you didn't believe me until now, here's Ingo with some definitive guidance around ASMX, ES and Remoting that I obviously very much agree with:

  • If you plan on using SOAP Web Services to integrate different platforms or different companies, I really urge you to look into ASMX (ASP.NET) Web Services instead of Remoting.
  • Do not try to fit distributed transactions, security, and such into custom channel sinks. Instead, use Enterprise Services if applicable in your environment.

I summarize his guidance for an old COM guy like myself as:

Whatever was cool with OLE Automation* is cool with Remoting. If the use-case looks much different, look elsewhere. 

*(no security or scalability to worry about, chatty interaction, stateful objects, events, late binding)


June 11, 2003
@ 08:22 AM

June 10, 2003
@ 01:09 PM

"ServicedComponentEx" broken: The code that I published a year ago here in order to fix the dependency of ServicedComponents on the default appdomain (and its config) no longer works with the Framework 1.1. The reason for that is that the cross-appdomain case of Remoting now explicitly filters calls to IRemoteDispatch and rejects them. The way the proxy attribute of my hack works is that it redirects the activation into a different AppDomain and therefore causes a "double proxy" to be created. The proxy that the ES infrastructure talks to is indeed a cross-appdomain proxy which talks to a transparent serviced component proxy in the target appdomain. If a managed call comes in from a different process, it wants to talk to IRemoteDispatch to circumvent COM/Interop (IRemoteDispatch is where the DCOM tunneled binary serializer packages get dropped in) and that call then gets forwarded into the secondary app domain through the cross-appdomain proxy.

Now, in the 1.1 Framework, any IRemoteDispatch calls through Remoting are explicitly rejected and hence the hack no longer works. Bummer. However, I did expect some things to break and this should be a reminder that any hacks in undocumented territory that you may find anywhere are not guaranteed to work in the next version, even point releases of the Framework. I am poking around to get the functionality restored, but based on what I've figured out so far it doesn't really seem possible without either violating the rules of the game in very horrible ways or moving this functionality into the core framework I built to make aspects work .... 


June 9, 2003
@ 11:14 PM

Christians: Herr Weyer gets it (finally) and Herr Nagel started a blog (finally).


Craig Andera on why AOP is broken & Why I surprisingly agree and what I am doing about it

Craig Andera has some interesting thoughts around AOP and specifically mentions the stuff that I have been doing in that area. And he says that it doesn't work and never has, because services are never truly orthogonal and have various interdependencies. In essence he's saying (I guess) that because the interdependencies just create a whole new level of complexity, the AOP approach is broken and it's better to generate explicit code instead of using interception techniques. I partially agree and always put a warning at the end of all of my talks around this issue: there is a limited set of use-cases for which an aspect'ish approach is useful. Security, logging, monitoring, billing, transaction enlistment, and a few others.

One of the biggest problems is service order. You need to run the decryption and signature verification services before you can even evaluate a header that any other service can use. And even then, when you have something like a transaction-enlistment filter, do you open the transaction before or after a logging service wants to write something to a database? Does the logged data need to stay in the logging store when a transaction aborts? Yes? What if the log is used for billing? No? What if the log is used for diagnostics?
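A tiny sketch of the service-order problem (illustrative Python only; the "cipher" is just string reversal and all names are invented):

```python
# Sketch: a message pipeline where stage order matters. A routing
# service can only evaluate a header after the decryption service
# has produced it; swapping the two stages breaks the pipeline.

def decrypt(msg):
    """Decryption service: turns 'encrypted_headers' into 'headers'.
    The 'cipher' here is just string reversal, for illustration."""
    out = dict(msg)
    out["headers"] = out.pop("encrypted_headers")[::-1]
    return out

def route_by_header(msg):
    """A later service that evaluates a header; it only works after
    the decryption stage has run."""
    return msg["headers"]

def run(stages, msg):
    # The pipeline itself has no idea which orderings are legal.
    for stage in stages:
        msg = stage(msg)
    return msg

secret = {"encrypted_headers": "ecila=ot"}
result = run([decrypt, route_by_header], secret)   # correct order
```

Run the same two stages in the other order and the pipeline fails with a missing-header error; the pipeline model itself carries no knowledge of which ordering is the legal one.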

However, being explicit when chaining services together doesn't make things any better than using interception:

catch( Exception e )
{
   // do proper handling
}

is just as broken. I don't think it fundamentally matters much how code gets woven into the call chain. Setting up contexts is just one issue. What's even more difficult is to find a way to deal with errors in the presence of cooperating aspects (or, in more general terms, interception services). What's clear is that there's no way around interception-driven services in a web services world. It's all pipeline-based and, even worse, the pipelines are distributed pipelines of pipelines. It's too simple to say "it's broken, get over it". That doesn't help solve what is an actual problem.

A promising approach is to make aspects/interceptors act like resource managers and coordinate their work using a very lightweight 2PC protocol ("AC" guarantee only; no "ID"). Using 2PC for this purpose allows interceptors/aspects to coordinate their work and know about each other before any work actually gets done. I have discussed these issues in depth with a couple of people, and we put some code together that essentially implements a little, in-memory "DTC" for that purpose. We call it a "WorkSet" instead of a transaction. There's still some work to be done there, but I think I'll be able to post an example in a little while. Maybe around TechEd Europe time.
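The idea can be sketched like this (illustrative Python; only the "WorkSet" name comes from our actual code, everything else here is invented for the example):

```python
# Sketch of the "WorkSet" idea: interceptors enlist like lightweight
# resource managers and coordinate through two phases ("AC" only:
# atomic and consistent, no isolation or durability).

class WorkSet:
    def __init__(self):
        self.members = []

    def enlist(self, member):
        # Members learn about each other before any work becomes final.
        self.members.append(member)

    def complete(self):
        # Phase 1: every enlisted interceptor may veto.
        if all(m.prepare() for m in self.members):
            for m in self.members:     # Phase 2: make the work permanent
                m.commit()
            return True
        for m in self.members:         # any veto undoes everyone's work
            m.rollback()
        return False

class InterceptorRM:
    """An interceptor acting as a resource manager."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "pending"
    def prepare(self):  return self.can_commit
    def commit(self):   self.state = "committed"
    def rollback(self): self.state = "rolled back"

ws = WorkSet()
logger = InterceptorRM("logging")
validator = InterceptorRM("validation", can_commit=False)
ws.enlist(logger)
ws.enlist(validator)
outcome = ws.complete()   # the validator vetoes; everything rolls back
```

The payoff is exactly the coordination that the plain pipeline model lacks: when the validation interceptor vetoes in phase one, the logging interceptor's work is undone too, without the two ever having been written against each other.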


June 8, 2003
@ 04:56 AM

Sam on Perf

Sam outs himself as a fan of low-level performance optimization. That's all good and fair, but often micro-optimization just takes way too much time with way too little of a result for the overall application throughput and its scalability. For distributed apps, the true optimization happens during the architecture phase. Or, as my friend Steve Swartz put it during our "Scalable Apps" tour: When you are stuck in a traffic jam with a Porsche, all you do is burn more gas in idle. Scalability is about building wider roads, not about building faster cars.


On 400 level sessions and scores

Samer Ibrahim writes "I believe that a 400 level session should present 400 level material regardless of how many people have never wrote a single line of code in their entire life.  That's not my problem and that's not fair to those of us who are here to get an edge.  Find 100-200 level sessions instead."

My WEB404 session at TechEd US was probably a 500 because I really had lots (too much) of code. The downside of doing 400 level sessions at an event with a very broad audience spectrum is that you are getting killed in the feedback and scores after the talk, no matter what you do. Either you're too shallow for some or you are too deep down in the bits for others. Now, what needs to be understood is that speakers will often scale back on content if they feel that the content is too deep for the audience they have, just because it'll kill their average score. There's lots of competition behind the scenes on that.

What was new at this TechEd was that the written comments are now available to MS in "softcopy", which means that they get printed up with the numbers. And if you have only 10 people in an audience of 300 who write "Thank you, this session was really helpful for me", you feel like you have done your job right and MS sees that too, which is of much higher importance for "us" external speakers than the average score.

So, here's a hint: My understanding is that the scoring system is still open over the weekend at www.mymsevents.com. If you attended a session that you found helpful and on which you haven't given a score so far, do so and don't forget to write a comment stating what you liked or what you would like to see improved. That's especially true for sessions with deep and focused technical content and lots of people in the audience. These will typically get comparatively bad scores, because it's nearly impossible that the content is absolutely relevant for 400 or 600 people in a room at a conference like that. So, if you think that the speaker did a good job, say so. You'll be heard.

(I should add that I am fairly happy with my scores already and I am not begging ;)


Andres observes that Steve and I are in agreement on very many things, including what to put on slides in talks covering services, layering and tiering.  ;)


Clemens - I attended your two sessions, AOP and "I dont know you could do that". Excellent stuff. Couple of questions I have:
1. I heard, and I might be wrong(and please correct me if I am), that you have serious issues with .Net Remoting. Is that true, and if it is, why? 2. In an app where you want to cache objects, would you use com+ object pooling, or are there better ways to cache your objects?
And last, but not least, have u written any papers? And can you tell me any good book to go deep into the stuff that you talked about?

Thanks Ali / Ali Khawaja • 6/6/03; 5:53:13 PM

1. I don't have serious issues with Remoting as such. I am just saying that it is the successor of Automation and not of the full blown DCOM model. Hence, it is useful in all the scenarios (mostly on-machine) where Automation is useful in the unmanaged world. Once you go across machines where security plays a role and when you need an appropriate hosting and process model for your objects, there is Enterprise Services. Whenever you see a need to add a custom channel sink to Remoting for authentication, authorization, encryption, or signature, there is a fair chance that you are using the wrong technology set. Whenever you think you need to write a custom host for your app in order to tune the thread pool and up the number of available threads for Remoting, you are using the wrong technology set. There's nothing fundamentally wrong about Remoting -- there's just a limited set of use-cases where it is applicable. My issue with it is only how many people are using it and how it is being portrayed as the successor to DCOM, which it is not.

One thing is important to keep in mind: The COM transport sits on top of Microsoft RPC, which is, in turn, the core technology stack that essentially powers most call-level communication between the components of Windows and hence has had full kernel support ever since the NT kernel saw the light of day. RPC supports virtually all network protocols as well as shared-memory marshaled L(R)PC [read!] for on-machine calls. Remoting sits on top of the CLR and on top of the Framework, which, in turn, sits on the Win32 user-level API. That's a wholly different ballgame.

Enterprise Services has a very elegant solution for mixing the two models in that it uses Remoting to do almost all marshaling work (with two exceptions: QC and calls with isomorphic call sigs) and then tunnels the serialized IMessage through DCOM transport, which means that you get full CLR type fidelity while using a rock solid transport that has been continuously optimized ever since 1993. I understand that some people consider a 10 year old protocol boring; I just call it "stable". Also I see people complaining about COM being hard to deploy, because it requires use of the registry and distribution of proxies. Admittedly, there's some truth to that, but in the end, you will also have to deploy and customize config files for Remoting and distribute proxies there. That's true for any RPC-type technology and is, as per current practice, even true for most Web Services. For distributed systems of any scale, "xcopy deployment" is a sweet dream. There's work to do.

2. Yes. Enterprise Services object pooling is great for pooling object instances and guarding access to limited resources.
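To make that concrete, here is a minimal sketch of what a pooled serviced component looks like. The class name and method are invented for illustration; the attributes ([ObjectPooling], [JustInTimeActivation], [AutoComplete]) and the CanBePooled override are the real System.EnterpriseServices API. Note that this is declarative COM+ configuration -- the component has to be registered with COM+ before it can run.

```csharp
using System;
using System.EnterpriseServices;

// Hypothetical component that guards a scarce resource (say, a
// licensed backend connection) behind a COM+ object pool.
[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 10, CreationTimeout = 30000)]
[JustInTimeActivation]
public class LicensedGateway : ServicedComponent
{
    public LicensedGateway()
    {
        // Expensive setup happens once per pooled instance,
        // not once per call: acquire the limited resource here.
    }

    [AutoComplete]
    public string Execute(string request)
    {
        // Work against the pooled resource. With JITA, the context is
        // deactivated after the call and the instance goes back to
        // the pool, so at most MaxPoolSize resources are ever in use.
        return "processed: " + request;
    }

    // COM+ asks this before returning an instance to the pool;
    // returning false discards the instance instead.
    protected override bool CanBePooled()
    {
        return true;
    }
}
```

The pool sizes cap concurrent use of the guarded resource: callers beyond MaxPoolSize block until an instance is returned or CreationTimeout expires.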

Finally, I have written a book on Enterprise Services, which is, for a variety of historic reasons, in German. However, I am talking to a publisher about a translation, and once that happens I will definitely rev it so that it incorporates all of my "current" thinking (of course).


June 7, 2003
@ 07:05 AM
Ingo has WSE 2.0 and is obviously all excited about it.

June 6, 2003
@ 03:47 PM


PerfectXml.com has a redirect tool up that presents this blog (and everyone else's blog) on their site: http://www.perfectxml.com/RSSConnect/RR.asp?u=http://radio.weblogs.com/0108971/rss.xml. Since they didn't get back to me on my email, I'll have to tell them in public:

Any content of this blog is my property. The RSS feed is available for aggregation and personal use by anyone, BUT if you republish my weblog on your website without my permission, you are stealing intellectual property and you are violating my copyright. Take that redirector down or block my blog.


June 5, 2003
@ 10:02 PM

Thank you, Julia. I am glad you liked my TechEd sessions and thank you for the kind words :)  However ...

What is so strange is that I cannot get  used to seeing him open up and work in Visual Studio. Why on earth is that? Perhaps it is something to do with the level of what he is talking about that it is bigger than coding, so though he obviously needs to code to put the concepts in action, it just seems almost mundane in comparison to the concepts.

Hmmm .... I am not sure whether I agree here. Most of the things that I talked about were really about code all the way, and then I can just as well show some (or flood the audience with code, as in WEB404). The takeaway is the concepts. My job is to bring lesser-known things into the limelight. In that I do agree.


TechEd / DEV359, WEB404 related code

Earlier builds and some explanation of the stuff that I have been showing in the talks can be found here (Enterprise Services AOP) and there (Web Services Extensibility). These builds are for Visual Studio .NET 2002. The builds for the new version will -- as said -- be available some time next week. Don't complain... it's free stuff, after all ;)


TechEd / Getting stuff out the door: Code for DEV357

Pending a more polished and documented version (which I'll publish some time next week), here's "just" the zipped up directory with the code from the DEV357 session (Building distributed apps). 41 C# files, 375KB of source code. Way too much ;)

The code for DEV359 and WEB404 is a bit more difficult to pack up, because it's much harder to deploy and get to work without a proper installer. Unfortunately all WMI support for the Framework died on this machine this week ("Provider load failure") and is fubar and therefore I can't test the installation procedures to put stuff into machine.config. So that may have to wait until next week :(


June 3, 2003
@ 07:49 PM

Demos, demos, demos

I should probably stop writing more stuff for my Thursday demos. I think that some 10000 lines of "giveaway" source code should be enough ... but somehow I feel like I am still not done yet. Here's a quick list of the stuff that I have with me to show.

  • Aspect oriented programming with Remoting and Enterprise Services and Web Services using the same extensibility model
  • An attribute driven validator for object-graphs of structures and classes of arbitrary depth (sort of like schema validation for XML, but on objects)
  • Schema facets (maxLength, minLength, pattern, etc.) generated into the WSDL from [WebMethod] parameters and data structures
  • Just-in-Time Activation pooling as the ultimate performance booster for Enterprise Services
  • A multi-tier "cascade" of data services that can serve up read-only data from cache (memory), isolated storage, a remote web service, or straight from a SQL store through the flip of a config file entry, and which are connected so that the isolated storage is refreshed through the web service, which in turn walks up to SQL.
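To illustrate the idea behind the attribute-driven validator in the list above, here is a minimal sketch. The attribute name, validator class, and recursion strategy are all invented for illustration (the real demo code is what I'm handing out); the sketch just shows the core technique: declaring facets as attributes and walking the object graph with reflection. It does not handle cyclic graphs.

```csharp
using System;
using System.Reflection;

// Hypothetical facet attribute, in the spirit of XML schema's maxLength.
[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property)]
public class MaxLengthAttribute : Attribute
{
    public int Length;
    public MaxLengthAttribute(int length) { Length = length; }
}

public class ObjectValidator
{
    // Walks the public properties of an object graph and checks
    // string values against their declared [MaxLength] facets.
    public static bool Validate(object graph)
    {
        if (graph == null) return true;
        foreach (PropertyInfo prop in graph.GetType().GetProperties())
        {
            object value = prop.GetValue(graph, null);
            object[] facets =
                prop.GetCustomAttributes(typeof(MaxLengthAttribute), true);
            foreach (MaxLengthAttribute facet in facets)
            {
                string s = value as string;
                if (s != null && s.Length > facet.Length) return false;
            }
            // Recurse into nested objects of arbitrary depth
            // (strings and primitives are leaves).
            if (value != null && !(value is string)
                && !value.GetType().IsPrimitive)
            {
                if (!Validate(value)) return false;
            }
        }
        return true;
    }
}
```

The same walk generalizes to other facets (minLength, pattern, range): each facet is just another attribute class checked in the same loop.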

What I will show, and in how much detail, largely depends on how the talks go in terms of timing. Since my DEV357, DEV359 and WEB404 talks are all back-to-back-to-back (different rooms, though), I will essentially be using one larger demo for all of them (and I am still putting it together right now .. cough!). In DEV357, I'll primarily talk about the relationship between ASMX Web Services and Enterprise Services and how to use ES efficiently as a backend for ASMX. In DEV359, I'll drill down into the "aspectish" elements of the demo application and talk about separation of primary concerns ("why you write the app") and secondary concerns ("stuff that needs to be done, too"). In WEB404, I will show how I teach "Add Web Reference" to generate code that has references to stuff in "newtelligence.Web.Services" in it and how I can make the schema in ASMX's generated WSDL a bit better.

I will try to post links to as much of the actual source code for the demos and its support libraries here by the end of the week. Don't expect anything before Thursday, though. Right now I am writing installers, because I don't want to make it unnecessarily hard for all of you to try the stuff at home.


TechEd: Meet Juval Löwy and me at the INETA booth

Juval Löwy and myself will be at the INETA booth (Aisle 600) in the expo area at TechEd today between Noon and 1:30pm and between 3:15pm and 5:00pm. So, if you have any questions about Enterprise Services or Web Services just come over and I am sure that Juval will have an appropriate answer for you ;)


"Power Lunch with Don Box and Friends"

As said before, I got invited to a fun lunch panel discussion with Don, Yasser Shohoud, and Steve Swartz. We chatted for about 45 minutes about things we all like and dislike about the .NET Framework as it ships today, about XML and SOAP standards, how to build Web Services "right" in .NET, about the unfortunate split between the infrastructures for Remoting, Enterprise Services and ASMX, and plenty of other little things.

Quote of the day:

  • Don: "So, Steve, is COM dead?"
  • Steve: "There's a time when you are growing up and everything is exciting at the time. There's always new things, new stuff to look at, it's all cool. And then at some point you're grown up and it's not that you die when you're a grownup, right? So, COM is a grownup now. It just lives." 

Ping! .... from TechEd 2003 Dallas

I haven't been blogging for more than two weeks now because of (a) being very busy on the road and (b) being sick all last week. After 8 or so weeks on the road, my body just went on strike and punished me for all the stress in many horrible ways.

Anyways, I am back online now, sitting in the speaker's lounge at TechEd Dallas checking some email, working on the samples for Thursday (details on that later) and just got invited by Don to participate in the "Web Services Roundtable: Power Lunch with Don Box and Friends" session that's at 12:15pm in the Arena. I don't know what's on the agenda, but Don said I should just come up. I am sure it's going to be fun.