Javier Gonzalez sent me an email today about my most recent SOA post, saying that it resonates with his experience:

I just read your article about services and find it very interesting. I have been using OOP languages to build somewhat complex systems for the last 5 years, and even though I have had some degree of success with them, I usually find myself facing those same problems you mention (why, for instance, do I have to throw an exception to a module that doesn't know how to deal with it?). Yes, objects in a well-designed OOP system are *supposed* to be loosely coupled, but then, is that really possible to achieve completely? So I do agree with you that SOA might be a solution to some of my nightmares. Only one thing bothers me, and that is service implementation. Services, and most of all Web Services, only care about interfaces, or better yet, contracts, but the functionality that those contracts provide has to be implemented in some way, right? Being as I am an "object fan", I would use an OO language, but I would like to hear your opinions on the subject. Also, there's something I call "service feasibility". Web Services, and SOA in general, do "sound" like a very nice idea, but then, on real systems they tend to be sluggish, to say the least. They can bring a network to its knees if the amount of information transmitted is only fair. SOAP is a very nice idea when it comes to interoperability, but the messages are *bloated* and the system's performance tends to suffer. -- I'd love to hear your opinions on these topics.

Here’s my reply to Javier:

Within a service, OOP remains as good an idea as it always was, because it gives us all the qualities of pre-built infrastructure reuse that we've learned to appreciate in recent years. I don't see much realistic potential for business logic or business object reuse, but OOP as a tool is alive and well.

Your point about services being sluggish has some truth to it if you look at system components in isolation. There is no doubt that a Porsche 911 is faster than a Ford Focus. However, if you look at a larger system as a whole (to stay with the picture, let's take a bridge crossing a river at rush hour), the Focus and the 911 move at the same speed because of congestion, a congestion that would occur even if everyone on that bridge were driving a 911. The primary goal is thus to make the bridge wider, not to give everyone a Porsche.

Maximizing throughput always tops optimizing raw performance. The idea of SOA in conjunction with autonomous computing networks decouples subsystems in such a way that you get largely independent processing islands connected by one-way roads to which you can add arbitrary numbers of lanes (and arbitrary numbers of identical islands). So while an individual operation may indeed take a bit longer and the bandwidth requirements may be higher, the overall system can scale its capacity and throughput virtually without limit.

Still, for a quick reality check: Have you ever looked at the size of the packets that IIOP or DCOM produce on the wire, and at the number of network roundtrips they require for protocol negotiation? The scary thing about SOAP is that it is very much in our face and relatively easy to comprehend, so people tend to pay more attention to it. If you compare common binary protocols to SOAP (considering a realistic mix of payloads), SOAP doesn't look all that horrible. Also, XML compresses really well, much better than binary data. All that said, I know that the vendors (specifically Microsoft) are looking very closely at how to reduce the wire footprint of SOAP, and I expect them to come around with proposals in the not-too-distant future.

Over in the comment view of that article, Stu Charlton raises some concerns and posts some questions. Here are some answers:

1) "No shared application state, everything must be passed through messages."  Every "service" oriented system I have ever witnessed has stated this as a goal, and eventually someone got sick of it and implemented a form of shared state. The GIT in COM, session variables in PL/SQL packages, ASP[.NET] Sessions, JSP HttpSession, common areas in CICS, Linda/JavaSpaces, Stateful Session Beans, Scratchpads / Blackboards, etc. Concern: No distributed computing paradigm has ever eliminated transient shared state, no matter how messy or unscalable it is.

Sessions are scoped to a conversation; what I mean is application-scoped state shared across sessions. Some of the examples you give are about session state, some are about application state. Session state can’t be avoided (although it can sometimes be piggybacked into the message flow) and is owned by a particular service. If you’ve started a conversation with a service, you need to go back to that service to continue the conversation. If the service itself is implemented using a local (load-balancing and/or failover) cluster, that’s great, but you shouldn’t need to know about it. Application state that’s shared between multiple services provided by an application leads to co-location assumptions and is therefore bad.

2) "A customer record isn't uniquely identifiable in-memory and even not an addressable on-disk entity that's known throughout the system"  -- Question: This confuses me quite a bit. Are you advocating the abolishment of a primary key for a piece of shared data? If not, what do you mean by this: no notion of global object identity (fair), or something else?

I am saying that not all data can and should be treated alike. There is shared data whose realistic frequency of change is so low that it simply doesn’t deserve uniqueness (and identification by a primary key in a central store). There is shared data for which a master copy exists, but of which many concurrent on-disk replicas and in-memory copies may safely float throughout the system, as long as there is an understanding of the temporal accuracy requirements as well as of the potential for concurrent modification. While there is always a theoretical potential for concurrent data modification, the reality of many systems is that records in many tables can and will never be concurrently accessed, because the information causing the change does not surface at two places at the same time. How many call center agents will realistically attempt to change a single customer’s address information at the same time? Lastly, there is data that should only be touched within a transaction and can and may only exist in a single place.

I am not abandoning the idea of a “primary key” or a unique customer number. I am saying that reflecting that uniqueness in in-memory state is rarely the right choice and rarely worth the hassle. Concurrent modification of data is rare, and there are techniques to eliminate it in many cases, for instance by introducing chronologies. Even if you are booking into a financial account, you are just adding information to a uniquely identifiable set of data. You are not modifying the account itself; you are adding information to it. Counter-example: if you have an object that represents a physical device such as a printer, a sensor, a network switch or a manufacturing robot, in-memory identity immediately reflects the identity of the physical entity you are dealing with. These are cases where objects and object identity make sense. That direct correspondence rarely exists in business systems, which deal with data about things, not with the things themselves.
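To make the chronology idea concrete, here is a minimal sketch (the class and member names are mine, not from any real system) of an account whose state is never modified in place. Bookings only ever append information, and the balance is a projection over the chronology:

```csharp
using System;
using System.Collections;

// Hypothetical sketch: an account identified by a unique number whose
// state is a chronology of bookings. Writers append entries; nobody
// ever updates the account record itself in place.
public class Account
{
    private readonly string accountNumber; // the "primary key"
    private readonly ArrayList bookings = new ArrayList();

    public Account(string accountNumber)
    {
        this.accountNumber = accountNumber;
    }

    public string AccountNumber { get { return accountNumber; } }

    // Booking adds information; it does not modify existing state.
    public void Book(decimal amount)
    {
        lock (bookings)
        {
            bookings.Add(amount);
        }
    }

    // The balance is derived from the chronology on demand.
    public decimal Balance
    {
        get
        {
            decimal sum = 0;
            lock (bookings)
            {
                foreach (decimal amount in bookings)
                    sum += amount;
            }
            return sum;
        }
    }
}
```

Because every change is an append, two concurrent bookings never conflict the way two in-place updates to a “balance” column would.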

3) "In a services world, there are no objects, just data". – […] Anyway, I don't think anyone [sane] has advocated building fine-grained object model distributed systems for quite a few years. […] But the object oriented community has known that for quite some time, hence the "Facade" pattern, and the packaging/reuse principles from folks such as Robert C. Martin. Domain models may still exist in the implementation of the service, depending on the complexity of the service.

OOP is great for the inner implementation of a service (see above), and I am in line with you here. There are, however, plenty of people who still believe in object purity, and that’s why I am saying what I am saying.

4) "data record stored & retrieved from many different data sources within the same application out of a variety of motivations"  --- I assume all of these copies of data are read-only, with one service having responsibility for updates. I also assume you mean that some form of optimistic conflict checking would be involved to ensure no lost updates. Concern: Traditionally we have had serializable transaction isolation to protect us from concurrent anomalies. Will we still have this sort of isolation in the face of multiple cached copies across web services?

I think that absolute temporal accuracy is severely overrated and is more an engineering obsession than anything else. Amazon.com basically lies to the faces of millions of users each day by saying “only 2-4 items left in stock” or “Usually ships within 24 hours”. Can they give you to-the-second accurate information from their backend warehouse? Of course they can’t. They won’t even tell you when your stuff ships while you’re going through checkout and have given them your money. They’ll do that later, by email.

I also think that the risk of concurrent updates to records is, as outlined above, very low if you segment your data along the lines of the business use cases and not so much along the lines of what a DBA thinks is perfect form.

I’ll skip 5) and 6) (the answers are “Ok” and “If you want to see it that way”) and move on to
7) "Problematic assumptions regarding single databases vs. parallel databases for scalability" -- I'm not sure what the problem is here from an SOA perspective? Isn't this a physical data architecture issue, something encapsulated by your database's interface? As far as I know it's pretty transparent to me if Oracle decides to use a parallel query, unless I dig into the SQL plan. […]

“which may or may not be directly supported by your database system” is the half-sentence to consider here as well. The Oracle cluster does it, SQL Server does it too, but there are other database systems out there, and there are also other ways of storing and accessing data than an RDBMS.

8) "Strong contracts eliminate "illegal argument" errors" Question: What about semantic constraints? Or referential integrity constraints? XML Schemas are richer than IDL, but they still don't capture rich semantic constraints (i.e. "book a room in this hotel, ensuring there are no overlapping reservations" -- or "employee reporting relationships must be hierarchical"). […]

“Book a room in this hotel” is a message to the service. The requirements-motivated answer to this message is either “yes” or “no”. “No overlapping reservations” is a local concern of that service, and so is “Sorry, we don’t know that hotel”. The employee reporting relationships for a message relayed to an HR service can indeed be expressed by referential constraints in XSD; the validity of merging the message into the backend store is an internal concern of the service. The answer is “can do that” or “can’t do that”.

What you won’t get are failures like “the employee name has more than 80 characters and we don’t know how to deal with that”. Stronger contracts and automatic enforcement of those contracts reduce the number of stupid errors, side effects, and combinations of the two to look for, at either endpoint.

9) "The vision of Web services as an integration tool of global scale exhibits these and other constraints, making it necessary to enable asynchronous behavior and parallel processing as a core principle of mainstream application design and don’t leave that as a specialty to the high-performance and super-computing space."  -- Concern: Distributed/concurrent/parallel computing is hard. I haven't seen much evidence that SOA/ web services makes this any easier. It makes contracts easier, and distributing data types easier. But it's up to the programming model (.NET, J2EE, or something else) to make the distributed/concurrent/parallel model easier. There are some signs of improvement here, but I'm skeptical there will be anything that breaks this stuff into the "mainstream" (I guess it depends on what one defines as mainstream)...

Oh, I wouldn’t be too sure about that. There are lots of things going on in that area that I know of but can’t talk about at present.

While SOA as a means of widespread systems integration is a solid idea, the dream of service-oriented "grid" computing isn't really economically viable unless the computation is very expensive. Co-locating processing & filtering as close as possible to the data source is still the key principle to an economic & performing system. (Jim Gray also has a recent paper on this on his website). Things like XQuery for integration and data federations (service oriented or not) still don't seem economically plausible until distributed query processors get a lot smarter and WAN costs go down.

Again, if the tools were up to speed, it would be economically feasible to do so. That’s going to be fixed. SOA-based grids sound much less like science fiction to me than they apparently do to you.

Categories: Architecture | IT Strategy

September 29, 2003
@ 10:30 AM

I just deleted a “direct marketing message” for a lottery with the subject line “[SPAM] category B winner”. The fact that a spammer labels his spam as [SPAM] is either funny, fair, or an unprecedented case of idiocy. I still have to make up my mind.

By the way, my little experiment, started August 8th, in which I set up an unmonitored and unused mail account just to see how much spam it attracts by just “being out there”, is starting to yield the expected results.

(Before you follow the above links, I recommend having a virus scanner running that monitors your web traffic; the “spamthisaccount” page has literal copies of the e-mails, including attachments. We are scanning for viruses, and right now there’s nothing harmful in the content directory, but you never know when it hits…)

Categories: Other Stuff

Steve Swartz is one of my very good personal friends and, that “personal function” aside, a Program Manager on Microsoft’s Indigo team; he was also the lead architect for much of the new functionality that we got in the Windows Server 2003 version of Enterprise Services (COM+ 1.5 for the old-fashioned folks). He wrote a comment on my previous post on this topic, where I explained how you can get the XML configuration story of the Framework to work with Enterprise Services using the .NET Framework 1.1.

In response to what I wrote, someone asked whether this would also work on Windows XP, because I was explicitly talking about Windows Server 2003. Steve’s answer to that question completes the picture and therefore it shouldn’t be buried in the comments. Steve writes:

In fact, this will work on XP and Windows Server 2003 so long as you have NETFX 1.1. The field has been there since XP; in NETFX 1.1, we set the current directory for the managed app domain.

This field was originally added to configure unmanaged fusion contexts. In that capacity, the field works with library and server apps alike. In its capacity as a setter of current appdomain directory, it works less well with library apps (natch).

Categories: Enterprise Services

Go here and read what Matt Davis at the Cognition and Brain Sciences Unit in Cambridge, UK has to say about one of the current "cool quotes" in blog space.

It's a very interesting read from someone who can explain that the following paragraph (that has been replicated across hundreds of blogs in the last two weeks and for which the source doesn't seem to be really known) isn't really accurate in what it's saying:

Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

Categories: Other Stuff

A long while back, I wrote about a hack to fix the dllhost.exe.config dilemma of Enterprise Services. That hack no longer works due to changes in the Framework, but the good news is that there is now an “official” and very easy solution. Unfortunately there is no documentation on it (or at least none easy enough to find for me to locate), and Google only yields a few hints if you know exactly what you are looking for. So, index this, Google!

What I call the “config dilemma” of Enterprise Services is this: because all out-of-process ES applications are executed using the surrogate process provided by %SystemRoot%\System32\dllhost.exe, and the runtime is loaded into that process, the default application configuration file is dllhost.exe.config, which must reside right next to dllhost.exe (in System32) and is therefore shared across all out-of-process Enterprise Services applications.

That makes using the XML configuration infrastructure for Enterprise Services very unattractive, to say the least.

Now, with COM+ 1.5 (Windows Server 2003) and the .NET Framework 1.1, things have changed in a big way.

To use per-application configuration files, all you have to do is create a (possibly otherwise empty) “application root directory” for your application, in which you place two files: an application.manifest file (that exact name) and an application.config file. Once your application is registered (lazily, using the RegistrationHelper class, or through regsvcs.exe), you will have to configure the application’s root directory in the catalog; that can be done either programmatically using the catalog admin API (ApplicationDirectory property) or through the Component Services explorer as shown above.

The picture shows that the example you can download using the link below is installed at “c:\Development\ES\ConfigTest\ESTest” on my machine and has the two files sitting right there.

The application.manifest file content is embarrassingly simple:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
</assembly>

and the application.config isn’t complicated either:

<?xml version="1.0"?>
<configuration>
  <appSettings>
     <add key="configBit" value="This rocks!"/>
  </appSettings>
</configuration>

These two files, placed into the same directory and properly configured as shown in the above picture, let this class

    public class SimpleComponent : ServicedComponent
    {
        public string GetConfigBit()
        {
            return ConfigurationSettings.AppSettings["configBit"];
        }
    }

yield the expected result for GetConfigBit(): “This rocks!”

Download: ESTest.zip

Categories: CLR | Enterprise Services

September 26, 2003
@ 05:59 AM

Lots of PDC hype these days. Here's a piece on Avalon by Wesner Moise that still leaves quite a bit in the fog.

My translation of what I am reading from the abstracts is:

Imagine Microsoft dropping the entire USER32 subsystem of Windows and replacing it with a brand-spanking-new windowing and I/O engine and a fully object-oriented, managed API, finally doing the long-overdue overhaul and replacement of the foundation of the Windows UI technologies that have, in essence, been with us since Windows 1.0 ...

.... and create a WOW ("Windows on Windows") subsystem layer, not dissimilar to what we saw in NT for Win16 apps, to support existing apps.

Categories: PDC 03 | CLR

I am in a blogging mood today … Here are some thoughts around composite metadata. Sorry for the bold title ;)

* * *

Whenever I am asked what I consider the most important innovation of the CLR, I don’t hesitate to respond “extensible metadata” coming in the form of custom attributes. Everyone who has followed this blog for a while and looked at some of the source code I published knows that I am seriously in love with attributes. In fact, very few of the projects I write don’t include at least one class derived from Attribute and once you use the XmlSerializer, Enterprise Services or ASMX, there’s no way around using them.

In my keynote on contracts and metadata at the Norwegian Visual Studio .NET 2003 launch earlier this year, I used the sample that’s attached at the bottom of this article. It illustrates how contracts can be enforced by both schema validation and validation of object graphs based on the same set of constraints. In schema, the constraints are defined using metadata (restrictions) inside element or type definitions; in classes, the very same restrictions can be applied using custom attributes, given a sufficient set of attributes and the respective validation logic. In both cases, the data is run through a filter that’s driven by the metadata. If either filter is used at the inbound and outbound channels of a service, contract enforcement is automatic and “contract trust” between services, as defined in my previous article, can be achieved. So far, so good.
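The object-graph half of such a filter can be sketched as follows. This is a simplified, hypothetical stand-in for the validation logic in the attached sample: the Match and MaxLength attributes here are my own minimal versions, and the validator only walks public string fields via reflection:

```csharp
using System;
using System.Collections;
using System.Reflection;
using System.Text.RegularExpressions;

// Simplified stand-ins for the constraint attributes used in the sample.
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public class MatchAttribute : Attribute
{
    private readonly string pattern;
    public MatchAttribute(string pattern) { this.pattern = pattern; }
    public string Pattern { get { return pattern; } }
}

[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public class MaxLengthAttribute : Attribute
{
    private readonly int length;
    public MaxLengthAttribute(int length) { this.length = length; }
    public int Length { get { return length; } }
}

public class ConstraintValidator
{
    // Metadata-driven filter: walks the public fields of an object and
    // checks each string value against the constraints declared on it.
    public static ArrayList Validate(object graph)
    {
        ArrayList violations = new ArrayList();
        foreach (FieldInfo field in graph.GetType().GetFields())
        {
            string value = field.GetValue(graph) as string;
            if (value == null) continue;

            foreach (MatchAttribute match in
                field.GetCustomAttributes(typeof(MatchAttribute), true))
            {
                if (!Regex.IsMatch(value, "^(" + match.Pattern + ")$"))
                    violations.Add(field.Name + ": pattern violated");
            }
            foreach (MaxLengthAttribute max in
                field.GetCustomAttributes(typeof(MaxLengthAttribute), true))
            {
                if (value.Length > max.Length)
                    violations.Add(field.Name + ": longer than " + max.Length);
            }
        }
        return violations;
    }
}

// Hypothetical sample type instrumented with the constraints
public class AddressSample
{
    [Match(@"\p{L}[\p{L}\p{P}0-9\s]*"), MaxLength(10)]
    public string City;
}
```

Run at the inbound and outbound channel of a service, a filter like this gives you the CLR-side counterpart of schema validation.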

In my example, the metadata instrumentation for a CLR type looks like this:

      [System.Xml.Serialization.XmlTypeAttribute(
          Namespace="urn:schemas-newtelligence-com:transactionsamples:customerdata:v1")]
      public class addressType
      {
          [Match(@"\p{L}[\p{L}\p{P}0-9\s]*"), MaxLength(80)]
          public string City;
          public countryNameType Country;
          public countryCodeType CountryCode;
          [MaxLength(10)]
          public string PostalCode;
          [MaxLength(160)]
          public string AddressLine;
      }

… while the corresponding schema is a bit better factored and looks like this:

    <xsd:simpleType name="nameType">
        <xsd:restriction base="xsd:string">
            <xsd:pattern value="\p{L}[\p{L}\p{P}0-9\s]*" />
        </xsd:restriction>
    </xsd:simpleType>
    <xsd:complexType name="addressType">
        <xsd:sequence>
            <xsd:element name="City">
                <xsd:simpleType>
                    <xsd:restriction base="nameType">
                        <xsd:maxLength value="80" />
                    </xsd:restriction>
                </xsd:simpleType>
            </xsd:element>
            <xsd:element name="Country" type="countryNameType" />
            <xsd:element name="CountryCode" type="countryCodeType" />
            <xsd:element name="PostalCode">
                <xsd:simpleType>
                    <xsd:restriction base="xsd:string">
                        <xsd:maxLength value="10" />
                    </xsd:restriction>
                </xsd:simpleType>
            </xsd:element>
            <xsd:element name="AddressLine">
                <xsd:simpleType>
                    <xsd:restriction base="xsd:string">
                        <xsd:maxLength value="160" />
                    </xsd:restriction>
                </xsd:simpleType>
            </xsd:element>
        </xsd:sequence>
    </xsd:complexType>

The restrictions are expressed differently, but in both cases they are aspects of the type, and they are semantically identical; even the regular expressions are the same. Both approaches work. All the sexiness of this example aside, there’s one thing that bugs me:

In XSD, I can create a new simple type by extending a base type with additional metadata like this:

<xsd:simpleType name="nameType">
       <xsd:restriction base="xsd:string">
              <xsd:pattern value="\p{L}[\p{L}\p{P}0-9\s]*" />
       </xsd:restriction>
</xsd:simpleType>

which causes the metadata to be inherited by the subsequent element definition, which in turn augments the type with further metadata rules:

<xsd:element name="City">
       <xsd:simpleType>
              <xsd:restriction base="nameType">
                     <xsd:maxLength value="80" />
              </xsd:restriction>
       </xsd:simpleType>
</xsd:element>

So, XSD knows how to do metadata inheritance on simple types. The basic storage type (xsd:string) isn’t changed by this augmentation; it’s just the validation rules that change, expressed by adding metadata to the type. The problem is that the CLR model isn’t directly compatible with this: you can’t derive from any of the simple types, and therefore you can’t project this schema directly onto a CLR type definition. Instead, I have to apply the metadata to every field/property, which is the equivalent of the XSD element declaration. The luxury of the <xsd:simpleType/> definition and inheritable metadata doesn’t exist. Or does it?

Well, using the following pattern, it can. Almost.

Let’s forget for a moment that the nameType simple type definition above is a restriction of xsd:string and focus on what it really does for us: it encapsulates metadata. When we inherit it into the City element, an additional metadata item is added, resulting in a metadata composite of two rules, applied to the base type xsd:string.

So the approximate equivalent of this, expressed in CLR terms, could look like this:

    [AttributeUsage(AttributeTargets.Field)]
    [Match(@"\p{L}[\p{L}\p{P}0-9\s]+")]
    public class NameTypeStringAttribute : Attribute
    {
    }

    [System.Xml.Serialization.XmlTypeAttribute(
        Namespace="urn:schemas-newtelligence-com:transactionsamples:customerdata:v1")]
    public class addressType
    {
        [NameTypeString, MaxLength(80)]
        public string City;

        …
    }

Now we have an attribute NameTypeString(Attribute) that fulfills the same metadata containment function. The attribute has an attribute. In fact, we could even go further and introduce a dedicated “CityString” meta-type, either by composition:

    [AttributeUsage(AttributeTargets.Field)]
    [NameTypeString, MaxLength(80)]
    public class CityStringAttribute : Attribute
    {
    }

… or by inheritance:

    [AttributeUsage(AttributeTargets.Field)]
    [MaxLength(80)]
    public class CityStringAttribute : NameTypeStringAttribute
    {
    }

Resulting in the simple field declaration:

    [CityString] public string City;

The declaration essentially tells us “stored as a string, following the contract rules as defined in the composite metadata of [CityString]”.

Having that, there is one thing still missing: how does the infrastructure tell whether an attribute is indeed a composite, that is, whether the applicable set of metadata is the combination of the attributes found on the inspected element and the attributes declared on those attributes themselves?

The answer is the following innocent looking marker interface:

    public interface ICompositeAttribute
    {  }

If that marker interface is found on an attribute, the attribute is considered a composite attribute and the infrastructure must (potentially recursively) consider attributes defined on this attribute in the same way as attributes that exist on the originally inspected element – for instance, a field.

    [AttributeUsage(AttributeTargets.Field)]
    [Match(@"\p{L}[\p{L}\p{P}0-9\s]+")]
    public class NameTypeStringAttribute : Attribute, ICompositeAttribute
    {   }

Why a marker interface and not just another attribute on the attribute? The answer is quite simple: Convenience. Using the marker interface, you can find composites simply with the following expression: *.GetCustomAttributes(typeof(ICompositeAttribute),true)

And why not use a base-class “CompositeAttribute”? Because that would be an unnecessary restriction for the composition of attributes. If only the marker interface is used, the composite can have any base attribute class, including those built into the system.
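Pulling the pieces together, the resolution step the infrastructure has to perform can be sketched like this. The helper class and method names are mine; the sketch assumes no cyclic composites, and a real infrastructure would filter the resulting list for the attribute types it understands:

```csharp
using System;
using System.Collections;
using System.Reflection;

public interface ICompositeAttribute { }

[AttributeUsage(AttributeTargets.All)]
public class MatchAttribute : Attribute
{
    public MatchAttribute(string pattern) { }
}

[AttributeUsage(AttributeTargets.All)]
public class MaxLengthAttribute : Attribute
{
    public MaxLengthAttribute(int length) { }
}

// The composite from the article: carries the Match rule along.
[AttributeUsage(AttributeTargets.Field)]
[Match(@"\p{L}[\p{L}\p{P}0-9\s]+")]
public class NameTypeStringAttribute : Attribute, ICompositeAttribute { }

public class Holder
{
    [NameTypeString, MaxLength(80)]
    public string City;
}

public class CompositeMetadata
{
    // Expands composites recursively into a flat list of effective
    // attributes: every attribute marked as ICompositeAttribute also
    // contributes the attributes declared on its own class.
    public static ArrayList GetEffectiveAttributes(ICustomAttributeProvider provider)
    {
        ArrayList result = new ArrayList();
        foreach (Attribute attribute in provider.GetCustomAttributes(true))
        {
            result.Add(attribute);
            if (attribute is ICompositeAttribute)
                result.AddRange(GetEffectiveAttributes(attribute.GetType()));
        }
        return result;
    }
}
```

For the City field above, the effective list contains both the MaxLength attribute declared on the field and the Match attribute carried in by the NameTypeString composite.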

But wait, this is just one side of the composition story for attributes. There’s already a hint at an additional composition quality two short paragraphs up: *.GetCustomAttributes(typeof(ICompositeAttribute),true). The metadata search algorithm doesn’t only look for concrete attribute types; it also looks for interfaces, which is what allows the above expression to work.

So what if an infrastructure like Enterprise Services did not use only concrete attributes, but also supported composable attributes, as illustrated here …

    public interface ITransactionAttribute
    {
        TransactionOption TransactionOption
        {
            get;
        }
    }

    public interface IObjectPoolingAttribute
    {
        int MinPoolSize
        {
            get;
        }

        int MaxPoolSize
        {
            get;
        }
    }

In that case, you would also be able to define composite attributes that standardize behavior for a certain class of ServicedComponents in your application that should all behave in a similar way, resulting in a declaration like this:

    public class StandardTransactionalPooledAttribute :
        Attribute, ITransactionAttribute, IObjectPoolingAttribute
    {
        // Illustrative standard settings for this class of components
        public TransactionOption TransactionOption { get { return TransactionOption.Required; } }
        public int MinPoolSize { get { return 2; } }
        public int MaxPoolSize { get { return 10; } }
    }

    [StandardTransactionalPooled]
    public class MyComponent : ServicedComponent
    {
    }

While it seems to be an “either/or” thing at first, both illustrated composition patterns, the one using ICompositeAttribute and the one that’s entirely based on the inherent composition qualities of interfaces, are useful. If you want to reuse a set of pre-built attributes like the ones I am using to implement the constraints, the marker-interface solution is very cheap, because the coding effort is minimal. If you are writing a larger infrastructure and want to give your users more control over what attributes do, allowing them to provide their own implementations, interface-based attributes may be the better choice.
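To show how an infrastructure could consume such interface-based attributes, here is a small self-contained sketch. The Infrastructure class and its lookup method are hypothetical, and the TransactionOption enum is a local stand-in for System.EnterpriseServices.TransactionOption; the point is that GetCustomAttributes also accepts an interface type, so the infrastructure never needs to know the concrete composite class:

```csharp
using System;

// Local stand-in for System.EnterpriseServices.TransactionOption
public enum TransactionOption { Disabled, NotSupported, Supported, Required, RequiresNew }

public interface ITransactionAttribute
{
    TransactionOption TransactionOption { get; }
}

// A user-defined composite standardizing behavior for a class of components
[AttributeUsage(AttributeTargets.Class)]
public class StandardTransactionalPooledAttribute : Attribute, ITransactionAttribute
{
    public TransactionOption TransactionOption
    {
        get { return TransactionOption.Required; }
    }
}

[StandardTransactionalPooled]
public class MyComponent { }

public class Infrastructure
{
    // Looks up the transaction setting by interface, not by concrete
    // attribute type, so any composite implementing the interface works.
    public static TransactionOption GetTransactionOption(Type componentType)
    {
        object[] attributes =
            componentType.GetCustomAttributes(typeof(ITransactionAttribute), true);
        return attributes.Length > 0
            ? ((ITransactionAttribute)attributes[0]).TransactionOption
            : TransactionOption.Disabled;
    }
}
```

The lookup finds StandardTransactionalPooledAttribute on MyComponent purely through ITransactionAttribute, which is what lets users swap in their own composite implementations.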

Download: MetadataTester.zip

Categories: Architecture | CLR

September 25, 2003
@ 03:27 PM
If you are a developer and don't live in the Netherlands (where SOA famously stands for "Sexueel Overdraagbare Aandoeningen", "sexually transmitted diseases"), you may have heard by now that SOA stands for "service-oriented architectures". In this article I am thinking aloud about what "services" mean in SOA.
Categories: Architecture | IT Strategy

September 25, 2003
@ 07:20 AM

I just uploaded a new build of dasBlog (1.3.3266) to the GotDotNet workspace and the dasBlog site. Although it carries a new minor version number, the changes aren't dramatic and the new build mostly consolidates some cosmetic changes, add-ons (languages) and fixes that were done by the folks in the GotDotNet workspace and myself.

Categories: dasBlog

September 24, 2003
@ 05:58 AM

The Register reports that MSN is killing its open, unmonitored chat rooms, except for MSN broadband subscribers, in order to protect children from abuse. I think that's sad for lots of people who love to chat (and I remember how much of a chat addict I was back in '94/'95), but since those %&$&# apparently can't be stopped unless you take away their ability to communicate, it's probably a good thing.

It's another example that the era of "free" and "anonymous" on the Internet will and must end at some point. In the long run, the Internet can't remain a lawless and anonymous space, but people will have to be held accountable for what they do. Unless that's understood and accepted by the Internet user community at large, we won't have proper protection from criminals, proper protection against spam, viruses, trojans and worms and appropriate security. In the end, security is not a function of software, but it's a function of administration. Protection from attacks against me, whether digitally or physically, is more important to me than the ability to roam around without being seen and known. You can't make yourself invisible in real life, either.

Categories: Other Stuff

I am investigating a problem that occurs very rarely in dasBlog and causes rendering of pages to fail consistently until the application is restarted by, for instance, touching web.config. I've seen this happen only on the newtelligence website, and only once.

Apparently, the resource manager that pulls in the localized strings for various elements, and which is stored in application state, is lost at some point, causing the data binding of the controls to throw a NullReferenceException. My assumption is that the Global.Application_Start() event isn't fired in all cases when the worker process recycles.

I am looking into it, but before I have a good answer for the why and a fix, the best workaround is to touch (load and save) web.config in order to restart the app.
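I haven't settled on the real fix yet, but one defensive pattern for this class of failure is to re-create the cached object lazily on demand instead of relying solely on a startup event. Here is a minimal sketch of that idea (in Java rather than ASP.NET, and with hypothetical names — this is not dasBlog's actual code):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: guard a per-application cached resource so that a
// lost entry is re-created on demand instead of causing a null-reference
// failure. Names are illustrative, not taken from dasBlog.
class ResourceCache {
    private static final ConcurrentHashMap<String, Object> appState =
            new ConcurrentHashMap<>();

    // computeIfAbsent re-runs initialization whenever the entry is missing,
    // e.g., after a worker-process recycle that skipped the startup code.
    static Object getResourceManager() {
        return appState.computeIfAbsent("resourceManager",
                key -> createResourceManager());
    }

    private static Object createResourceManager() {
        // Stand-in for loading the localized string resources.
        return new Object();
    }
}
```

The point of the pattern is that callers never observe a null entry: a recycled process simply pays the initialization cost again on first access.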

Update: While I still don't know why the original problem happened, I have a permanent workaround for 1.3, which is due very soon (I am already posting using the 1.3 build that consolidates a couple of fixes, language additions and minor enhancements).

Categories: dasBlog

Bob Cancilla’s CNet article is so full of FUD that I can’t help but make a few more comments and post a few questions. Unfortunately, his email address isn’t mentioned near the article … therefore I have to blog it. Mr. Cancilla, feel free to use the comments feature here, if you find this…

Unlike IBM, Microsoft falls short when it comes to helping customers use standards in a productive, cost-effective way. […] Sure, both companies have worked closely to develop and promote a sizable number of important industry standards that will continue to have a big impact on the way business is conducted in the foreseeable future. But cool specs are meaningless to the IT people who must actually assemble all those standards into real business solutions. That's where the rubber meets the road for Web services. Redmond's approach to Web services is a dead-end of closed, Windows-only systems that lock customers into a single computing model. Customers don't have the freedom to choose the best hardware or operating system. Where does that leave the millions of users who rely on non-Microsoft platforms such as mainframes, Unix or Linux?

First of all, Mr. Cancilla, you haven’t understood Web Services at all. Web Services are about connecting systems, irrespective of operating system, application platform or programming model. Redmond’s approach to web services is just like IBM’s and BEA’s and Sun’s and Oracle’s approach to Web Services. All of them think they have a superior application platform, and their embrace of Web Services serves to make that platform the hub of communication for their own and all other systems that are (for them: unfortunately) running on other platforms in the reality of today’s heterogeneous IT landscape. It’s about opening services for access by other platforms. I wish I knew how you got the idea that “lock-in” and “Web Services” belong in the same sentence.

Secondly, show me an environment that enables the average programmer to be more productive and hence more cost effective when developing XML and Web Services solutions than Microsoft’s Visual Studio .NET – to a degree that it backs up your “falls short” claim.

Third, I wonder how someone who has dedicated his career to one of the most monopolistic, locked-down and proprietary platforms in existence, that is IBM’s midrange and mainframe platforms, feels qualified to discredit Microsoft for their platform strategy. In fact, I can run Windows on pretty much any AMD and Intel-based server or desktop from any vendor – how’s that with your AS/400 and mainframe apps?

Ultimately, .Net defeats the purpose of open standards because Microsoft products are open only as long as you develop applications on the Windows platform. To me, this doesn't say open, it says welcome to yet another Microsoft environment that is anything but open.

Likewise, IBM’s full Web Services stack is only open as long as you write applications for their WebSphere environment. WebSphere is IBM’s application server, and Microsoft’s application server is Windows Server 2003. Every vendor who makes money from software tries to build a superior platform, resulting in features that aren’t covered by standards and therefore cause vendor lock-in. That’s a direct result of the market economy. However, this still doesn’t have anything to do with Web Services, because these are “on-the-wire” XML message exchange standards that exist primarily for the purpose of cross-platform interaction.

Proprietary environments deny businesses the flexibility to choose best-of-breed solutions that are fine-tuned to their industry's unique environment.

… like OS/400 and OS/390 ?

Additionally, Microsoft's claim that .Net's Web services platform saves customers money is misleading. Sure, the initial investment is enticing, but how much will it cost when the hard work begins? A recent Gartner report said companies planning to move their old programs to .Net can expect to pay 40 percent to 60 percent of the cost of developing the programs in the first place.

A recent discussion with my 9 year old niece has shown that moving “old programs” from anywhere to anywhere isn’t free and that anyone who’d make that claim shouldn’t be working in this industry.

Building your company's Web services platform on .Net is fine if you don't mind throwing away decades of investment in existing applications. For instance, on any given day, businesses use CICS systems to process about 30 billion transactions, to the tune of $1 trillion. They can't afford to rip out that kind of processing power. Instead, they're looking for ways to exploit it within other applications. But if they were to buy into .Net, they'd better be prepared to stack it on the shelf because Microsoft's Host Integration Server provides limited access to CICS on mainframes.

Ok … here we have it. CICS is a lock-in, proprietary IBM product, right? So, what’s better than Host Integration Server? I suspect it’s an IBM product, correct? So if you were to replace that all-so-powerful IBM mainframe with any other technology (including Linux), of course using a different approach to architecture (which is entirely possible), how would you avoid throwing away that investment?

What seems to be promoted here is “stay with IBM, use their stack”. I have all respect for the power of the IBM mainframe platforms, but using “openness” as an argument in this context, and for the conclusions the author is drawing, is nothing less than perverse.

Categories: IT Strategy

September 23, 2003
@ 08:05 AM

Here are the two PPT decks from yesterday's talks at the JAOO conference and a few notes...

Layers-Tiers-Aspects-CV-V2.ppt (1.24 MB):

This deck is about layers and tiers and highlights (well, the talk that goes along with the deck does) how I make a strict distinction between the terms "layer" and "tier". "Layer" is about organizing code in order to make it more resilient against change in other layers, and "tier" is about distributing layers across processes and machines, defining appropriate boundaries, and selecting technologies to cross those boundaries. I am also advocating generalizing the "classic" 3-layer (not tier!) model of "presentation", "business logic", and "data access", and making the underlying idea a pervasive and recursive pattern for basically all code in a business app.

Any class and any module may have one or multiple "public interfaces" that may be mapped to several incoming channels bound to different technologies. The public interfaces themselves (this includes public methods of a plain class) don't implement any logic, but always delegate to a strictly private internal implementation. That implementation, in turn, will not talk to external resources and services directly, but bind to abstract interfaces and access them via factories. (I will explain this in more detail here when I can make the time to do so)
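Until I get to that fuller explanation, the idea above can be sketched in a few lines. This is a minimal illustration in Java (fitting for a JAOO talk), with entirely hypothetical names: a public class whose public methods only delegate, a strictly private implementation, and an abstract resource interface obtained through a factory rather than accessed directly.

```java
// Abstract interface for an external resource (e.g., a data store).
// The implementation class binds to this, never to a concrete resource.
interface CustomerStore {
    String lookupName(int customerId);
}

// Factory that hides the concrete resource binding. In a real system
// the choice of implementation would be driven by configuration.
class StoreFactory {
    static CustomerStore createCustomerStore() {
        return new CustomerStore() {
            public String lookupName(int customerId) {
                return "Customer-" + customerId; // stand-in for a real lookup
            }
        };
    }
}

// Public surface: implements no logic itself, always delegates to the
// strictly private internal implementation.
class CustomerService {
    private final CustomerServiceImpl impl = new CustomerServiceImpl();

    public String getCustomerName(int customerId) {
        return impl.getCustomerName(customerId);
    }

    // Private implementation: talks to external resources only through
    // the abstract interface obtained from the factory.
    private static class CustomerServiceImpl {
        private final CustomerStore store = StoreFactory.createCustomerStore();

        String getCustomerName(int customerId) {
            return store.lookupName(customerId);
        }
    }
}
```

The payoff of the indirection is that the same public surface can be mapped to several incoming channels (web service, remoting, in-process call) while the implementation's resource bindings can be swapped in the factory without touching either side.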

At JAOO, the short AOP section of this deck drew some furious comments from an attendee after the session, who said that I was totally wrong and that AOP worked brilliantly as a general-purpose programming paradigm. However, after talking to him for a while, he had to admit that he and the colleagues on his project are indeed carefully considering and defining aspect dependencies, and he sort of acknowledged that while their set of aspects will work great in and by itself, it would be hard to combine it with an arbitrary foreign set of aspects. My main takeaway from the discussion with him was, though, that (a) it's about time for Java (and C#) to get support for generics, because that may be a better tool for a couple of things he pointed out, and (b) that if you give people a tool like AspectJ, they will just jump in and reinvent the wheel. The aspects he said his team implemented were (in Enterprise Services terms) Transactions, JITA, Tracing, Security, etc. All the usual suspects.

SOA-CV-V1-final.ppt (745.5 KB)

This deck is an updated version of the Service Oriented Architectures deck that I've been using for this year's Microsoft EMEA Architect's Tour. I've included a couple of new aspects, including a stronger endorsement of UDDI, an explanation of the relevance of WS-Policy and WS-Addressing, a look at the relevance of WSDL in the presence of policy and addressing, and a reference (and two borrowed slides) to my friend Arvindra Sehmi's most excellent presentation (free registration may be required) on autonomous computing and queueing networks, which has become a very important part of the overall SOA story for me.


Categories: Talks | JAOO 2003

September 22, 2003
@ 05:39 PM

This article on news.com is just beyond belief. Mr. Cancilla, exactly how open is OS/400 as per your definition?

Categories: Other Stuff

I am sitting here right outside the conference venue of the JAOO Conference in Aarhus in Denmark, which kicks off the Fall/Winter 2003 conference season for me. I am speaking about Service Oriented Architectures and Web Services in my first talk and will drill down on Layers, Tiers, and Services in my second talk. Unfortunately the time slots are just 45 minutes and I just can't get myself to cut too much of the content ... as usual. Later in the week, I'll go to the BASTA! conference in Frankfurt where I won't speak, but I want to check out how Jörg, Achim and Michael are doing and talk to a couple of folks there.

Anyways, after my vacation and a week of orientation on what to do next, I am back in business. And after "the summer of the blog engine", I'll go back to focus more on architectural topics -- including here.

Categories: Blog | Talks

The new newtelligence homepage now runs on top of dasBlog. In fact, there are a couple of features, like the nested categories and the whole localization story, that only made it into the blog engine because we wanted to use dasBlog for that purpose as well.

Categories: dasBlog

September 15, 2003
@ 10:36 AM

The temples of the old Khmer empires in Angkor (Siem Reap) in Cambodia are truly amazing and a must-see for anyone interested in ancient cultures. (Although they are actually medieval on the Western time scale, considering the time they were built – between 900 AD and 1300 AD.) The picture shows the most famous and best preserved temple, Angkor Wat, which can only be compared in terms of overall scale and work effort to the great Cheops pyramid in Giza. Angkor Wat is the biggest religious site on the planet.


I don’t even know where to start writing about how impressive the Angkor sites are, and I am certainly not the least bit qualified to describe them properly, so it's best for you to check this very informative guide to the Angkor monuments.


In Siem Reap we stayed (luckily) at one of the two best hotels in town, the Sofitel, which was US$100 a night, but which I can highly recommend if you want to avoid a major culture shock. Cambodia is one of the poorest countries in the world, and while Siem Reap doesn’t immediately reflect this, staying at the Sofitel is certainly the best thing a western tourist can do who is not of the adventurer/backpacker type. I spoke to a Swiss tour organizer who specializes in Cambodia tours, and he told me that he has consolidated his hotel list to only 4 hotels in Siem Reap; the Sofitel easily tops his favorites list – and it’s not the most expensive one. In general, Siem Reap is not a very cheap place to go to considering all costs, but it’s all money well spent if you consider that tourism is the primary source of income and the economic engine for literally hundreds of thousands of people in the Siem Reap region, and that Angkor is still mostly a destination for “those who know”.


The entry fee for all of the Angkor sites is US$20 per person per day, or US$60 for a three-day pass. A good local tour guide and a taxi driver will cost you between US$30 and US$40 per day. You’ll need both, and you shouldn’t try to explore the sites with just a book – the guides speak good English (a German-speaking guide will cost US$10/day more) and are usually very well educated about the sites, and they will fill you in on all the religious background and legends that you will need to understand to appreciate the art. Food can be very cheap (less than US$1 for a meal) if you are one of the daring types with a strong stomach, or between US$10 and US$30 at a hotel or at the very few proper restaurants, if you are such a civilization wimp as I am.


What you definitely need is lots of sun-block, anti-mosquito spray, light clothes and a hat. Even in the rainy season (which is now) it’s very hot around noon and the humidity is easily >90%. But that’s not so different from Cairo ;)

Categories: Cambodia

September 15, 2003
@ 05:27 AM

Hey, Don! Your "API of the day" entries make me wonder whether you got bored with XML. Isn't this counter-revolutionary activity?

I actually checked twice whether I am looking at the right date ;)

+1 on MkParseDisplayName(Ex)


Categories: COM

September 15, 2003
@ 04:43 AM

I am safely back from my Asia tour. Patricia and I have seen lots of very cool places, and I am sure going to post some pictures today and tomorrow. The one thing that didn’t really work well for me was Internet access, so I was essentially offline for the last two weeks. So, first things first: below you’ll finally find the download links for the demos of my talks in Malaysia.

Download: FlightsRUs.zip
Download: newtelligenceSDK-2-21-3239-0.zip
Download: NorthwindTechEdMalaysia.zip

Categories: TechEd Malaysia | Travel

I have to admit that I was a bit hesitant when Patricia came up with the idea to go to Vietnam on the “leisure loop” of our South-East Asia trip. The country is, of course, still controlled by a socialist party and is even still called a “Socialist Republic”, and I really knew very little about Vietnam except for the horrible historic events of the 1960s and 1970s. Of course, I was absolutely wrong to be concerned about our safety, and Ho Chi Minh City isn’t socialistically dull and boring at all. In fact, it’s great fun!

We met Dung (right) and Tuang (on the left) at a street corner where they asked us whether we wanted to take a tour on their bike-carts. One hour per person for US$3. Although Vietnam does of course have a proper local currency (the exchange rate is about 15,500 VND for 1 USD), everything can be paid for in U.S. Dollars, and that’s actually the preferred way of payment. In fact, US$3/hr is already relatively expensive considering that locals can get a full meal for less than 30 cents and that you can buy 2.5 liters of (surprisingly good) local draught beer for about 30,000 VND if you (a) find the right place and (b) have local people with you, as we did.

However … Dung and Tuang’s services are easily worth their money. They are very friendly (it seems like any Vietnamese person you could meet in the streets of HCMC is like that), they speak English well enough for a conversation and to explain a couple of things here and there, and of course they know the places to go. But their most amazing skill is navigating through the traffic chaos of Ho Chi Minh City. In “HCMC”, you see a couple of cars here and there, but the streets are dominated by thousands of quadrillion-bazillions of small motorbikes. And of course, nobody pays close attention to traffic rules (if there are any), but mysteriously, it just works. Even if you go with a slow bike-cart against a one-way street smack in the middle of the road, the traffic flows magically around you and you never get a feeling of being in danger. The secret seems to be that everyone drives very slowly and everyone seems very alert. I would think that the average speed in traffic is about 25-30 km/h. Dung and Tuang took us around for about 6 hours for the money equivalent of 3 rollercoaster rides at the “Kirmes” in Düsseldorf, complete with the entire thrill but a lot more fun.

What becomes very apparent even as you approach the city center from the airport is that all that seems left of the “Socialist Republic” are the occasional slogans on street posters; otherwise, the market rules. We were told that it’s very different outside the two big cities, Ho Chi Minh City and Hanoi, and that agricultural collectives are still the common organization of work there, but there’s no trace of what I think of as a “Socialist Republic”, having East Germany in mind as an example. Quite to the contrary, Saigon (you will find the old name used much more frequently than “Ho Chi Minh City”; “Sai Gon” is in fact the name of the central 1st district) is a very colorful and vibrant city with a lot of very visible entrepreneurial spirit.

While it seems to be a fun place to be (I spoke to an English guy who went there for a three-day trip, went home, quit his job, and has now been there for 9 months already), don’t expect too many great sightseeing experiences. There are a couple of things to see, but nothing too spectacular. The “War Remnants Museum”, on the premises of the old U.S. embassy (from which the last U.S. troops were evacuated by swarms of helicopters), shows a couple of U.S. weapons that were left behind, including the obvious Huey chopper, an F-5 fighter, a couple of tanks and all sorts of short-range missiles. The two most horrible weapons on display are two “daisy cutters” – 7-ton fuel bombs that annihilate all life within a 500m radius of their point of detonation. The most horrible pictures on display in the adjacent exhibition halls are those that illustrate the short- and long-term effects of the infamous “Agent Orange” on people. Interestingly enough, the rest of the exhibition – except for some obvious propaganda in a hall illustrating the world’s support for Vietnam in the times of war – is mostly from U.S. publication sources and photographers, which gives the exhibition some (strange) balance. The entrance fee is 10,000 VND per person.

Otherwise, Saigon is a very interesting and friendly place to explore if you have two or three days while in South East Asia. The Vietnamese people have long understood what Dollars and Euros are worth, not only to them but also to you, so don’t expect it to be very cheap. Saigon seems more expensive than, for instance, Bangkok, and is much less developed at the same time. However, its appeal comes precisely from feeling much less “westernized” than any of the other Asian cities I’ve been to.

Categories: Vietnam

September 1, 2003
@ 09:35 AM
Due to lack of Internet access, the samples from TechEd Malaysia won't be posted until the end of this week.
Categories: TechEd Malaysia