Exactly. You don't need any of COM+ (which is really what to call it) unless you are trying to use the DTC. The rest is all done by the runtime already as you point out. [Sam Gentile's Radio Weblog]

No. Show me one COM+ feature -- except the non-transactional incarnation of the shared property manager (AppDomain statics may cover that) -- which is exactly or even similarly replicated in the runtime. App-specific roles aren't, object pooling isn't, JIT activation isn't, queued components aren't, the process model (thread pool, app pool, pause, disable, recycling) isn't, event monitoring isn't, loosely coupled events aren't, memory gates aren't, secure RPC isn't, the security context isn't -- and this list isn't complete. It's simply not right, Sam.


John Lam shares his thoughts on Enterprise Services and says that many features of ES are replicated throughout the framework and that he therefore sees little need to use ES anymore except for distributed transactions. This was my exact position about a year ago. At the time, I posted a lengthy statement with almost the same arguments to the Microsoft Regional Directors (non-public) mailing list, and since then I have had a lot of very valuable discussions with a lot of very smart folks who mapped out the differences between what's in the framework and what's in ES and who helped me understand that I simply wasn't right.

Let me go over what John highlights:

  • Load balancing. Load balancing is a filter for work where the amount of work either is not predictable or can't be handled by a single system. You load balance as close to the "topmost" client as you can, handle parts of the load there, and reduce the need for load balancing downstream. If you have a website or web service, you load balance the web tier. Will you load balance the business logic tier? Possibly, but the load generated by one web server towards its backend is typically predictable enough to eliminate the need for component load balancing and to make rather static assignments of backend servers to groups of web servers instead. Component load balancing is only really useful when you can't load-balance the presentation tier (for instance, if you have GUI clients) or when you have a huge spread in execution times for a single class (say, if you allow users to execute ad-hoc database queries).
  • ASP.NET as an application host. Hosting your business logic there is fine if you are entirely stateless. If you need to keep and share expensive-to-acquire application state (such as large caches), or if you need to guard a set of resources that are limited for your entire web farm, hosting there has limitations. The most important limitation is security: in ASP.NET, everything happens within the security context of the external caller (or its delegate, the ASPNET account), and that's problematic. You will want to do certain things with elevated privileges in the context of a service account, and LogonUser() isn't really what you want to call in that case.
  • Roles. You can make your own user and role types in the .NET framework, but not many people do. You'd have to write your own admin tools, your own infrastructure and you'd have to provide a mapping to OS roles and users for infrastructure access. If you stick with OS roles (SAM or Active Directory groups, in essence) and use the PrincipalPermissionAttribute as a replacement for ES role-based security, you will lose a level of indirection. Instead of defining a role required to access a single method on a single class in a single application right there in that application, you will have to define that in Active Directory and have it replicate throughout your AD structure. There can be very many such roles.
  • Object pooling: Object pooling is a good workaround for limitations of OLE DB, but that's just one aspect. It's a generic semaphore for classes. It helps implement write access to any resource with limited or no concurrency control (the art of handling FileStream.Lock() is often long forgotten), it'll help you pre-initialize and control access to things like 3270 terminal screen-scrapers with a very limited number of permitted concurrent sessions, or maybe interfaces to physical devices of which you only have one or four (like a metal-sheet press). Nothing you couldn't do without ES, but ... it's already there, and the number of folks who don't want to spend the time implementing all the required infrastructure goo is substantial.
  • Transactions: Even with a single resource manager, using ES for transactions is not a bad idea in complex systems. For the simplest case, where a component method creates a transaction, does work and commits it, using a native database connection is a good thing and the fastest choice. But if a component calls another component (which may itself call other components) and the transaction shall span those components, transaction management can easily get out of hand. You will have to pass the database connection and the transaction objects around (in the case of ADO.NET), you will have to negotiate who may commit or abort, and you may have to collect votes on the outcome. Also, since components are black boxes, you will not always know whether a component you want to enlist in your transaction requires a second resource manager -- in which case your native database transaction can't be used. (Update: check out Ingo Rammer's comments on the same topic)
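To make the comparison with hand-rolled infrastructure concrete, here is a sketch of how several of the features above are declared in Enterprise Services. The attribute types are the real System.EnterpriseServices ones; the class, role names and business logic are made up for illustration:

```csharp
using System;
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]         // DTC transaction created/joined for you
[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 4)] // generic semaphore around a limited resource
[JustInTimeActivation]
[SecurityRole("Clerks")]                          // app-specific role, lives in the COM+ catalog
public class OrderProcessor : ServicedComponent
{
    [AutoComplete] // commit on normal return, abort on exception
    public void Submit(int orderId)
    {
        // finer-grained, method-level role check
        if (!ContextUtil.IsCallerInRole("Supervisors"))
            throw new UnauthorizedAccessException("Supervisors only.");

        // ... do transactional work against one or more resource managers
    }
}
```

None of the plumbing behind these five attributes exists in the plain runtime -- that's the point of the list above.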

Still, I consider John's remarks very valid for a large number of web applications. COM+ is no longer the automatic default for hosting business logic, as it was when stuff was implemented in ASP and VB6 components instead of ASP.NET. However, ASP.NET hasn't become the automatic default for hosting all things related to a web application, either. Your mileage may vary.



Why you want to use Enterprise Services for your .NET application
Part 1: Introduction
Part 2: Basic Architectural Considerations and the benefits of Processes and Process Models

UI -> BusinessLogic -> DataAccess. This three-layer model is the most common way to separate the functional blocks of applications. It's clear, simple and very obvious. If you're writing a web application, all the stuff on top of ASP, JSP or ASP.NET is your UI layer; whatever is called by that layer is your business logic layer, and that, in itself, is split into a "logic" and a "data access" layer, which serves to make your data access code more resilient against changes in data access technology, database product choice or, in the simplest and most common case, schema changes in the underlying data store. If you are writing a GUI application, all GUI-related functionality is in the top layer, and you may be able to use the same business logic layer as you are using for a web application. Ideally, all aspects specific to the UI type and technology are handled in the UI layer, and business logic shall be as resilient against changes in the UI as it is against changes in the data layer. That's why we have those layers.

I call them "layers", not "tiers". In my world (not necessarily in everyone else's), a layer is a purely logical concept. Layering is about separating functionally different areas of code. To me, a tier is solely a physical concept: it is about how code gets distributed in a runtime environment. A GUI application that ends up being compiled into a single EXE may be built using multiple separate layers just as much as a server application that potentially maps each layer onto a distinct physical tier. More likely, though, is that stuff from two or more layers gets mapped into one tier, or that two tiers handle one layer. Example: SQL Server stored procedures and components manipulating ADO.NET DataSets using ad-hoc SQL all belong to a data access layer. Still, they are physically deployed in two places: inside SQL Server and in a process that accesses SQL Server from the outside. I would call that two separate tiers, but one layer. A "single EXE" GUI application has many layers, but possibly only one tier (if it does, for instance, use the JetDB engine mapped into its own process space).

That's all well-known and very obvious to almost everybody as long as the "UI" role is very obvious (web frontend or GUI) and the "Data Store" role is very obvious (some RDBMS). Most commonly, (sub-)systems that feature this type of layering run in "reactive mode": they are triggered by some user activity and run one or more (potentially parallel) sequences of activities in response. Not infrequently, architectural confusion begins whenever a system shall perform autonomous actions (for instance based on timers) or when the trigger for an activity is not a user, but some other binary lifeform. Where does that fit into the layering picture? Does it fit at all? Also, what if my business code needs to invoke a remote system through a web service, needs to submit a document to a remote site using an infrastructure like BizTalk, or simply wants to send an email via SMTP? Where does that go?

In my world, the acronym "UI" doesn't mean "user interface", it means "use-case interface". Everything that triggers any activity in the business logic layer is a "UI". A Web Service is a UI, a BizTalk Server application integration component (AIC) is a UI, a Windows Service process is a UI. The business code doesn't really care whether the current method call was originally triggered by a human being clicking anywhere on a remote screen. All such UIs can share a lot of code. Indeed, "business logic" is all code that is UI-agnostic in this expanded scenario.
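A tiny sketch of what that sharing looks like in practice -- all names here are made up for illustration, and only one of the "UIs" is shown:

```csharp
using System.Web.Services;

// Business logic layer: knows nothing about who calls it.
public class OrderLogic
{
    public decimal GetOrderTotal(int orderId)
    {
        // ... fetch and compute; stubbed out for the sketch
        return 0m;
    }
}

// "UI" #1: a web service -- the caller is another program, not a human.
public class OrderService : WebService
{
    [WebMethod]
    public decimal GetOrderTotal(int orderId)
    {
        return new OrderLogic().GetOrderTotal(orderId);
    }
}

// "UI" #2 (not shown): a Windows Service with a timer could call the very
// same OrderLogic class autonomously -- no human, no browser involved.
```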

The "Data Access" layer isn't just for databases. All code that accesses any functionality outside your own application, and which is triggered by activities rooted in your own application, belongs there. If you call a remote web service or a remote application that's not under your own immediate control, you need to make your business code resilient against changes in those external applications. If you send a Word document attached to an email via SMTP now, you may want to send a PDF document via other channels tomorrow. The fact that the information must be sent doesn't change; formatting and ways of sending do. So, I like to speak of an "Infrastructure Access" layer rather than a "Data Access" layer to limit confusion.

What we're getting out of this are three separate layers of code: "Use-Case Interface", "Business Logic" and "Infrastructure Access". It's a good way to organize interfaces and code, and it works very well for large teams. What we're not getting out of this is a consistent and reliable mapping to a runtime environment. Business logic will execute in the process space of ASP.NET, a BizTalk AIC, a Windows Service or a desktop GUI application, and all these process spaces are very different. The GUI EXE runs reliably until the user Alt-F4's it or the machine becomes unresponsive courtesy of Windows Exploder. BizTalk will load and unload an AIC (and hence your entire layered model) for the duration of a single action. ASP.NET will load your code, but it'll sometimes recycle the process "suddenly" for various good reasons. A Windows service has a very predictable execution profile (starts/stops at boot/shutdown), but by itself it doesn't have a concept of communication with the outside world -- you'll have to make it an RPC or Remoting server or a Message Queue listener yourself, and that will involve creating and maintaining worker threads, etc.

If you want to write applications that deal with data efficiently and truly scale, you will want to cache large parts of those 80% of all tables in your data model that hold static or near-static data in memory. You will want to keep pools of infrastructure objects ready and initialized. You will want to have pre-activated and smart "gatekeepers" that guard access to limited or expensive external resources such as 3270 terminal sessions, remote web services with low bandwidth, etc. What you need is a predictable execution environment: one that allows you to coordinate access to limited resources, that allows you to keep caches alive and current, and that provides you with a security boundary allowing authorization for accessing services, plus security-identity switches so that services can be accessed with elevated privileges on behalf of such authorized users. What you want is to go "out of process".

"Going out of process" and hosting your business logic in a dedicated environment is not a "necessary evil", it's a carefully chosen and intended feature of your architecture. Enterprise Services/COM+ (and J2EE application servers) provide you with such a predictable hosting environment for your "Business Logic" and "Infrastructure Access" components. "Going out of process" means that you isolate your business code from the unique behavior of your "Use-Case Interface" hosting environment.

What you get is a process with a well-defined process model. It'll create and manage thread pools, it'll manage external access, and it'll provide you with a way to access this functionality from other processes. Using Enterprise Services applications "in process" is a special case for when you only deal with a single "Use-Case Interface" and are ready to deal with whatever restrictions its process model imposes on your business logic. Hosting business logic "out of process" is the default.

That's why you want an application server environment. Enterprise Services/COM+ and most J2EE application servers provide such an environment. These principles apply on the server, but also on the desktop. Paying the price of cross-process marshaling is not something you are forced to do under torture; it's something you do because you get something for that price. Power = Work / Time.

Today's favorite Enterprise Services attribute: [assembly:ApplicationActivation(ActivationOption.Server)]
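For context, that attribute lives next to a couple of siblings at the assembly level. A minimal server-application setup looks roughly like this -- the attribute types are the real System.EnterpriseServices ones, the application name and key file are made-up placeholders:

```csharp
using System.EnterpriseServices;
using System.Reflection;

[assembly: ApplicationName("MyBusinessLogic")]             // name shown in the COM+ catalog
[assembly: ApplicationActivation(ActivationOption.Server)] // out of process, own surrogate
[assembly: ApplicationAccessControl(true)]                 // turn on role-based security checks
[assembly: AssemblyKeyFile("mykey.snk")]                   // serviced components need a strong name
```

With ActivationOption.Library instead, the components would load into the caller's process and inherit its process model -- the "special case" described above.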

Next installment: Part 3: Management of expensive and limited resources



Why you want to use Enterprise Services for your .NET application
Part 1: Introduction

Yesterday I did a 4.5 hour talk about the relevance and basics of Enterprise Services here at TornadoCamp.NET. In our audience, about 90% are developers who have been using mostly VB6/VB5 up until now, and more than half are writing "classic" client/server applications with (very) fat clients and the only server-side action happening inside SQL Server, Oracle or Sybase by way of stored procedures. What I've found here is consistent with what I find at our other workshops and the very many other events where I speak: only very few developers have ever really used COM+ or MTS for anything but server-side transaction handling, and the majority never even looked at Enterprise Services/COM+/MTS at all.

Why that is the case is easily explained; there are two primary reasons:

(a) Visual Basic 6 (and previous versions) is the most popular language for writing business applications on Windows, at least with our customers and the people I usually talk to at conferences and events. COM+ provides quite a few very useful features which either can't be used from within VB's "STA ghetto" due to its inability to produce thread-safe code (like object pooling) or which are very difficult to deploy without rather complex installation scripts (like "loosely coupled events").

(b) The main reason is a different one: COM+ provides the implementation of a lot of common architectural patterns and solutions to very typical functional challenges. If I either don't understand these patterns or, more often, don't see an obvious mapping of such a functional challenge in my project to a feature provided by COM+, I simply won't use it. The dilemma: if you don't really know what's in the COM+ feature bag, you won't be able to find out why you'd ever want to consider using it. And if you have no interest in COM+, you will not buy a book on it. For most developers, all feature areas of COM+ beyond "Transaction.Required" therefore remain in the dark.

So, instead of blogging random Enterprise Services features out of context (such as CoRegisterSurrogateEx), I will try to illustrate the "why" and the use-cases for several (best: all) Enterprise Services/COM+ services in a very compact, blog-compatible form, which will hopefully create a context for the other obscure things I typically write about and allow more people to see why this stuff is very relevant for their apps.

Next installment: Part 2: Basic Architectural Considerations and the benefits of Processes and Process Models



I start to wonder whether it may make sense to do a conference or tour really dedicated to Enterprise Services in XP/.NET Server.


CoRegisterSurrogateEx continued: Tomas comments here in the blog: "Let me see if I get one thing straight: You say we should _enable_ the application when the host starts up and _disable_ it when it shuts down, right?"   --- Yes, exactly.

"While we're on it, I think in many cases it would just be enough (and easier, up to a point), to simply make the application components fail activation if they're not running inside your custom surrogate.... what do you think?" -- That would be a good safeguard against failures caused by an incorrect "activation environment". Still, while throwing exceptions at activation time is a good indicator that something is wrong, it won't help you spin up a working host. Also, once a "wrong host" is running, you'll have problems getting the custom host to run properly. Therefore, disabling or "run as NT service" is the better idea.

Summary: You can host COM+ server applications in a custom host process, with all features. In fact, the host process doesn't need to be fully dedicated to hosting the COM+ app. You just need to spin up one thread for CoRegisterSurrogateEx (which in turn will spawn all threads required for the COM+ app), while other threads can do other things.


Custom Surrogate. Well, the use of CoRegisterSurrogateEx() turned out to be easier than I expected. I built a small managed sample to... [Commonality]

A few comments on using CoRegisterSurrogateEx.

CoRegisterSurrogateEx expects the application id of an Enterprise Services/COM+ package. It "consumes" the calling thread and sets up the COM+ thread pools, etc. on top of the calling host.

Using CoRegisterSurrogateEx causes your server apps to behave exactly the same way as if they were hosted by dllhost.exe; dllhost.exe is a very thin wrapper around this exact API. If you launch the app from within managed code (as Tomas does), the serviced components will be activated in the default domain of the host, which frees them from the "GAC ghetto".
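A managed host along these lines boils down to a single P/Invoke call on a dedicated thread. This is a sketch only: the P/Invoke signature below is my assumption for this barely documented ole32.dll API, and the application id is a made-up placeholder -- verify both against the headers and your catalog before relying on them:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

class ComPlusHost
{
    // Assumed signature for the barely documented API; the second
    // parameter is treated as reserved and passed as IntPtr.Zero here.
    [DllImport("ole32.dll")]
    static extern int CoRegisterSurrogateEx(ref Guid rguidApp, IntPtr reserved);

    // Made-up placeholder -- replace with the AppID of your COM+ package.
    static Guid appId = new Guid("11111111-2222-3333-4444-555555555555");

    static void HostThreadProc()
    {
        // This call "consumes" the thread: it blocks while the COM+
        // application runs on top of this process.
        int hr = CoRegisterSurrogateEx(ref appId, IntPtr.Zero);
        Marshal.ThrowExceptionForHR(hr);
    }

    static void Main()
    {
        Thread host = new Thread(new ThreadStart(HostThreadProc));
        host.Start();
        // ... other threads are free to do entirely unrelated work here
    }
}
```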

Bad news: COM+ will ALWAYS start packages on top of dllhost.exe if an activation occurs and the package isn't running. That means that your custom host must be running before any activation takes place. The best ways to guarantee this are only available on COM+ 1.5:

(a) Disable/Enable the application when your own host spins up and shuts down. That requires that the custom host runs in the context of a principal with write access to the COM+ catalog.
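Option (a) can be scripted against the COM+ admin catalog. A sketch, assuming an interop assembly generated from the COM+ admin type library (tlbimp COMAdmin); the application name is hypothetical, the "IsEnabled" property requires COM+ 1.5, and the exact interop member shapes may differ slightly depending on how you import the type library:

```csharp
using COMAdmin; // interop assembly generated from the COM+ admin type library

class AppToggler
{
    // Enable or disable a COM+ 1.5 application by name.
    // Requires write access to the COM+ catalog.
    static void SetEnabled(string appName, bool enabled)
    {
        COMAdminCatalog catalog = new COMAdminCatalogClass();
        COMAdminCatalogCollection apps =
            (COMAdminCatalogCollection)catalog.GetCollection("Applications");
        apps.Populate();

        foreach (COMAdminCatalogObject app in apps)
        {
            if ((string)app.Name == appName)
            {
                app.set_Value("IsEnabled", enabled);
                apps.SaveChanges();
                break;
            }
        }
    }
}
```

Call SetEnabled("MyApp", true) when your host spins up and SetEnabled("MyApp", false) when it shuts down.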

(b) Host the application inside a managed Windows Service, register the COM+ app to "run as NT service" and patch the registry (yes, you heard right) for that service so that the service host is your own host and not dllhost.exe. The value to patch is "ImagePath" under "SYSTEM\\CurrentControlSet\\Services\\{ServiceName}"; it must be set to your exe instead of dllhost.exe. The service must be set up to match the logon identity of the COM+ app. It can't be "interactive user", obviously.
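The registry patch itself is a one-liner with Microsoft.Win32. The service name and path below are made-up examples, and this obviously needs administrative rights:

```csharp
using Microsoft.Win32;

class ServicePatcher
{
    // Repoint an NT service's binary at a custom surrogate host
    // instead of dllhost.exe.
    static void PatchImagePath(string serviceName, string hostExePath)
    {
        RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Services\" + serviceName, true);
        key.SetValue("ImagePath", hostExePath);
        key.Close();
    }
}

// e.g. PatchImagePath("MyComPlusApp", @"c:\hosts\MyCustomHost.exe");
```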

If you grab my esutilities and try the EnterpriseServicesApplicationInstaller with its "UseCustomSurrogateService" property set to true, it'll dump a service process exe right next to the registered assembly and do the necessary registry patches. (Source code for some of these things isn't available yet, but will be in the foreseeable future.)

On Windows 2000, you will probably have to register/unregister the whole COM+ app at startup/spindown as Disable/Enable isn't available there. Due to the auto-registration features of Enterprise Services that isn't really as bad as it sounds -- it's still bad, but not SO bad.



Blogging from TornadoCamp.net (just finished the introductory keynote -- CLR, JIT, GC, etc.)

A few comments on Tomas' wishlist...

  • As of COM+ 1.5, you have full control over the process lifetime. You can disable/enable, pause/resume, auto-recycle, be notified of spin-up and shutdown, and you can host your own process with CoRegisterSurrogateEx().
  • As of Win2K SP3 and later, you can define a single port for each DCOM endpoint (per process).
  • For a release from the dllhost.exe ghetto (actually from the default domain) and therefore for eliminating the need for the GAC, consider my "ServicedComponentEx" hack from this source code archive. (This version only works for server-activated components.)
  • The archive also has a managed catalog wrapper :)
  • I am told that there's indeed a somewhat public load balancing hook in COM+ which would allow you to write your own Application Center if you wanted. I haven't really looked at that yet (I am betting that it's something to look at), but I have no reason to doubt that the source of this information is telling me the right thing.

In general, however, I would say that good use-cases for CLB are fairly rare. Only if you are seeing a huge variation in processing times (e.g., user-supplied ad-hoc queries against a data store) is CLB a better tool than load balancing on the user tier and binding groups of user-tier servers to dedicated backend servers.


November 24, 2002
@ 10:49 PM

Mission accomplished. :-D

Ingo Rammer is now an Enterprise Services guy. We had a very interesting discussion at a conference last week and I explained some additional Enterprise Services details to Ingo for which I had no room in my book. We also talked a bit about the results of my analysis of the COM+ patent, and I guess that may have sparked Ingo's wish to be able to get at those internals and extend the "unmanaged context". My impression is that COM+ may be a bit too far down the road in its life-cycle for Microsoft to make such "extensibility for everyone" happen there, but the general interest in AOP and the various extensibility points in managed code today already seem to hint at a more extensible architecture for whatever type of renovated/rebuilt/new services infrastructure they may come out with tomorrow.


Public revenge. #1 in Germany's mainstream music charts is "Der Steuersong", performed by an imitator of Gerhard Schröder (video links 1 2). "You elected me, now you won't get rid of me, we'll raise the taxes and get all the money out of your pockets that we can." The Schröder administration will raise taxes and compulsory social insurance fees across the board to counter a "sudden" dramatic drop in tax income for 2002/2003 (>€30bn) that they claim they didn't know about before this year's elections. And that in one of the worst economic situations in German post-war history. There's going to be a parliamentary commission investigating whether the administration intentionally lied to the people. All that on top of the administration's embarrassing foreign policy and diplomacy. Many people regret their votes. Better start thinking earlier. Thanks :(


Power = Work / Time

.NET Remoting performs as well as or better than DCOM? Binary performs better than XML? A Porsche 911 performs better than a Freightliner Truck? Yes. No.

"Performance" is an abused term. It's too often used as a synonym for "speed" and mostly in a completely unqualified and unquantified context.

Performance belongs on the left side of power = work/time. In contrast, "speed" is simply operations/time. The difference: "operations" is about crunching machine code instructions, "work" is about handling application features.

To execute a remote call, .NET Remoting may be doing just as well as (or a tiny bit better than) COM-transport-tunneled serviced components in absolute time -- but there's a lot less work being done by Remoting: context propagation, authentication, authorization, signature, encryption, etc. are things that COM does on top of what Remoting does.

Conversations using protocols that carry binary data are faster than conversations carrying XML in absolute time -- but there's a lot less work being done, and the benefits of that work are interoperability, extensibility and the enabling of the virtualization of system and network services.


Here's a reminder to get your disaster recovery plans up to date and not to keep all backups on site. Very sad. University of Twente NOC Destroyed [Slashdot]


So, Microsoft bans modified XBoxes from XBox Live. I may be alone, but I honestly think that's a reasonable move in favor of -- plain and simple -- gameplay. The whole online "play with random folks on the Internet" thing is only fun as long as no one is cheating. Mod your box as you wish, but don't ruin my game-night by using an "invulnerability hack". I think that putting mod-chips into the XBox certainly isn't an evil act as such (if I wish, I can just as well gut the box and turn it into a cat toilet), but the consequence of being able to mess around with the games and spoil others' fun by being unfair in the online game certainly is.

The most interesting aspect of service-oriented architectures is that they have potentially unlimited nesting. A full-blown SOA solution is just a simple service to others.

Almost sold out. Only one seat left! It's going to be a fun week, next week. Maybe we'll do one in English soon ;)


November 20, 2002
@ 04:24 PM

Is WSDL too hard? In response to Greg Reinacker's comments: I didn't say WSDL is hard, I said it's cumbersome and unproductive. Come on, it's just angle brackets, how hard can it be?? [Simon Fell]

Hard or not hard -- can we agree on "it's just not enough"? :) My main problem with WSDL is that it tries to do two things (message contract and transport mapping) when three things need doing (message contract, service contract and transport mapping) -- and at the same time, no one thing (WSDL) should do all three. They should be left to three separate languages: a message contract definition language (defining soap:Body content), a service contract definition language (soap:Header) and a "web services binding language" that maps messages combined with services onto transports.



WS-Security and SAML got this year's PC Magazine technical excellence award in the Protocols category. Congratulations to the authors. Cool. (Did I say "draft standards" anywhere here?)

It's not easy to read, and it's certainly not written to entertain, but it's still one of the most important pieces of information on COM+ out there: U.S. Patent 6,422,620. PDF browser at espacenet, image and full-text versions (you want to look at the text version first) at the USPTO.

The patent explains how COM+ works internally -- how stuff gets activated, how policies provide extensibility points, how contexts are built and how context propagation works. The patent was filed a long while ago (Aug 17, 1998), but the document was only published by the USPTO three months ago, and although in XML times it may seem like anything from 1998 must be outdated, this stuff describes quite well what's happening inside any copy of Win2K and up. Reminder: it's not a "how to" guide for hooking your own stuff into COM+, but it allows you to understand what they've done. Reading it is also a pretty complicated way of explaining to oneself why WS-Coordination is such a relevant WS spec.

Related: US6473791, US6301601, US6134594, US6014666, US5958004, US5890161,  US6425017   


Excuse me? Life? For hacking? So what penalty does one get who physically breaks into a doctor's office and steals a server hard-drive (along with backups) containing vital medical information? Death?

    Ouch!...House approves bill to make hacking automatic life sentence [Scott Hanselman's Weblog]



Sigh....Net Server: Three delays a charm? [Scott Hanselman's Weblog]

Translated into a bit of my world: The server-version of COM+ 1.5 now ships in April 2003. Sigh!


The event I've been waiting all weekend to announce: Everett is out!
Visual Studio .NET 2003 Final Beta is here:
For MSDN members only:

download: http://msdn.microsoft.com/subscriptions/resources/subdwnld.asp
site: http://msdn.microsoft.com/vstudio/productinfo/vstudio03/

[Sam Gentile's Weblog]

.... that event is also an event that's releasing me from yet another NDA. It's a "freedom of speech" event. Celebrate!

There's tons of cool new things in Everett, but don't look for the next wave of revolutions. Everett comes with very many little improvements here and there, some needed, some nice to have, but no huge new chunks of functionality -- MS simply made a good thing better and that's perfectly cool this time around :)

One of the little things that I really like is that in C#, typing "override<space>" inside a class-body will bring up IntelliSense with choices from the base class. Once you select a method to override, IntelliSense will give you a default implementation for the method that calls the base-class. Pretty.


November 20, 2002
@ 01:39 AM
Architect's Forum, Oslo (Dec. 9-10) is the first stop on the tour. This is the 4th time I am going to be in Norway this year and I am always happy to go back -- Norway is a great country -- it'd be a "fantastic country" if a beer (in words: one) didn't cost at least €7.50.

Benelux, mark your calendars for Feb 18-19: Developer Days 2003. I will be doing a rerun of my Web Services DevCon talk about how to extend ASP.NET Web Services with custom extensions, and I am honored to have been invited to do one of the keynotes, which will, among other things, highlight and (maybe) prove that "Enterprise Services" (COM+ if you're of the old-fashioned type) is now more than ever the heart and soul of scalable, robust and secure .NET server applications.

Pretty much 100% of what I am working on right now is covered by some NDA and still keeps me very busy at the same time. Funny how one starts feeling "guilty" about not blogging for a while. With the number of people who have linked here and/or visit frequently, keeping the blog going really is like "customer service" -- in a good sense.

However, there's light at the end of this tunnel. I am preparing for a speaking tour throughout Europe which will kick off in Norway next month and will continue through 10 more countries from January to April '03. I'll talk about "service-oriented architectures" and "aspect-oriented programming/metadata-driven architectures" on this tour - as usual there's going to be plenty of "demo-code fallout" from brand-new talks, which I'll post around here in the upcoming weeks.

And now for something completely different: http://www.somethingawful.com/photoshop/  ;)


November 15, 2002
@ 04:56 PM

Little known COM feature: CoGetInterceptor. This function provides you with a universal interception mechanism that lets you dynamically inspect all aspects of a call, and it feels a lot like .NET Remoting context interception sinks (which unfortunately went from documented to "internal only" in the .NET Framework RTM).

I don't have the cycles right now to provide an isolated sample or an in-depth explanation, but it works something like this:

IUnknown * pItfToBeIntercepted;     // the actual target interface
ICallInterceptor * pInterceptor;

// ... get pItfToBeIntercepted from somewhere

// MyEventHandler implements ICallFrameEvents and wraps the target interface
MyEventHandler * myEventHandler = new MyEventHandler( pItfToBeIntercepted );

// ask for an interceptor for the interface to be intercepted
// (the interceptor interface is ICallInterceptor, declared in callobj.h)
CoGetInterceptor(iidToBeIntercepted, NULL, IID_ICallInterceptor, (void**)&pInterceptor);

// register the sink that gets to see every call
pInterceptor->RegisterSink( myEventHandler );

with myEventHandler being an instance of a class that implements ICallFrameEvents. That interface has a method OnCall that gives you the ICallFrame info. You forward the call to the actual target object using ICallFrame::Invoke, or you can just consume the call right there and not forward it.

To get between the caller and the object, you call QueryInterface for iidToBeIntercepted on pInterceptor and hand this reference to the client instead of the actual interface. The actual "inner" interface is wrapped by the class that handles ICallFrameEvents, which forwards the call to it inside OnCall using ICallFrame::Invoke (as shown in the pseudo-code constructor above).

If the target object is aggregatable, you can do all of this in an outer QueryInterface, proxying each interface being asked for and therefore construct a fully transparent interception layer.


To everyone visiting this blog infrequently: The calendar on the right broke a few weeks ago. Please use this link to get to older content. Also, check my stories section.


Busy times. No time to blog in the past two weeks. We're doing a major redesign of the newtelligence website, which will finally be fully dynamic, web service enabled and the new home for my blog. I've written a news aggregation service using all the good stuff in the .NET Framework and a bunch of ASP.NET controls for this.

We'll hopefully be done with all of that by the end of next month -- it takes a long time because we're really busy with our regular work in between. Just got back from a week-long .NET workshop and will be going to another one next week, followed two weeks later by our open-for-everyone .NET seminar "TornadoCamp.NET". (All in German; still a few seats left.)

[Yes, TornadoCamp.NET might sound like a silly name, but it works as an expressive anglicism for the German market]