A flock of pigs has been doing aerobatics high up over Microsoft Campus in Redmond for the past three weeks. Neither City of Redmond nor Microsoft spokespeople returned calls requesting comments in time for this article. A Microsoft worker who requested anonymity and has seen the pigs flying overhead commented that "they are as good as the Blue Angels at Seafair, just funnier" and "they seem to circle over building 42 a lot, but I wouldn't know why".

In related news ...

We wrapped up the BizTalk Services "R11" CTP this last Thursday and put the latest SDK release up on http://labs.biztalk.net/. As you may or may not know, "BizTalk Services" is the codename for Microsoft's cloud-based Identity and Connectivity services - with a significant set of further services in the pipeline. The R11 release is a major milestone for the data center side of BizTalk Services, but we've also added several new client-facing features, especially on the Identity services. You can now authenticate using a certificate in addition to username and CardSpace authentication, we have enabled support for 3rd party managed CardSpace cards, and there is extended support for claims based authorization.

Now the surprising bit:

Only about an hour before we locked down the SDK on Thursday, we checked a sample into the samples tree that has a rather unusual set of prerequisites for something coming out of Microsoft:

Runtime: Java EE 5 on Sun Glassfish v2 + Sun WSIT/Metro (JAX-WS extensions), Tool: Netbeans 6.0 IDE.

The sample shows how to use the BizTalk Services Identity Security Token Service (STS) to secure the communication between a Java client and a Java service providing federated authentication and claims-based authorization.
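For those curious about what's on the wire: the client gets its token by sending the STS a WS-Trust RequestSecurityToken (RST) message and then presents the issued token to the service. Below is a minimal sketch of just the RST body, built with the JDK's DOM API. The namespace URIs are those of the February 2005 WS-Trust draft that the WCF and Metro stacks of this era interoperate on; the target address is a placeholder, not our actual endpoint.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class RstSketch {
    // February 2005 WS-Trust / WS-Policy / WS-Addressing namespaces.
    static final String WST = "http://schemas.xmlsoap.org/ws/2005/02/trust";
    static final String WSP = "http://schemas.xmlsoap.org/ws/2004/09/policy";
    static final String WSA = "http://www.w3.org/2005/08/addressing";

    // Builds the body of a WS-Trust "Issue" request, scoped to the
    // relying-party service we ultimately want to call.
    public static Document buildIssueRequest(String appliesToAddress) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        Element rst = doc.createElementNS(WST, "wst:RequestSecurityToken");
        doc.appendChild(rst);

        // Tell the STS we want a token issued (as opposed to renewed etc.).
        Element requestType = doc.createElementNS(WST, "wst:RequestType");
        requestType.setTextContent(WST + "/Issue");
        rst.appendChild(requestType);

        // Scope the token to the target service's endpoint address.
        Element appliesTo = doc.createElementNS(WSP, "wsp:AppliesTo");
        Element epr = doc.createElementNS(WSA, "wsa:EndpointReference");
        Element address = doc.createElementNS(WSA, "wsa:Address");
        address.setTextContent(appliesToAddress);
        epr.appendChild(address);
        appliesTo.appendChild(epr);
        rst.appendChild(appliesTo);

        return doc;
    }
}
```

In the actual sample, the JAX-WS/WSIT runtime composes and secures this message for you based on the WSDL policy; hand-building it is only useful for understanding what travels on the wire.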

The sample, which you can find in ./Samples/OtherPlatforms/StandaloneAccessControl/JavaEE5 once you've installed the SDK, is a pure Java sample that doesn't require any of our bits on either the service or client side. The interaction with our services happens purely on the wire.

If you are a "Javahead", it might seem odd that we're shipping this sample inside a Windows-only MSI installer, and I'll agree that it is odd. It's simply a function of timing and of when we knew that we could get it done (some more on that below). For the next BizTalk Services SDK release I expect there to be an additional .jar file for the Java samples.

It's important to note that this isn't a one-time thing we did just because we could. We have done a significant amount of work on the backend protocol implementations to start opening up a very broad set of scenarios on the BizTalk Services Connectivity services for platforms other than .NET. We already have a set of additional Java EE samples lined up for when we enable that functionality on the backend. However, since getting security and identity working is a prerequisite for making all the other services work, that's where we started. There'll be more, and there'll be more platform and language choice than just Java down the road.

Just to be perfectly clear: Around here we strongly believe that .NET and the Windows Communication Foundation in particular is the most advanced platform to build services, irrespective of whether they are of the WS-* or REST variety. If you care about my personal opinion, I'll say that several months of research into the capabilities of other platforms has only reaffirmed that belief for me and I don't even need to put a Microsoft hat on to say that.

But we recognize and respect that there are a great variety of individual reasons why people might not be using .NET and WCF. The obvious one is "platform". If you run on Linux or Unix and/or if your deployment target is a Java Application Server, then your platform is very likely not .NET. It's something else. If that's your world, we still think that our services are something that's useful for your applications and we want to show you why. And it is absolutely not enough for us to say "here is the wire protocol documentation; go party!". Only Code is Truth.

I'm writing "Only Code is Truth" also because we've found - perhaps not too surprisingly - that there is a significant difference between reading and implementing the WS-* specs and having things actually work. And here I get to the point where a round of public "Thank You" is due:

The Metro team over at Sun Microsystems has made a very significant contribution to making this all work. Before we started making changes to accommodate Java, there would have been very little hope for anyone to get this seemingly simple scenario to work. We had to make quite a few changes even though our service did follow the specs.

While we were adjusting our backend STS accordingly, the Sun Metro team worked on a set of issues that we identified on their end (with fantastic turnaround times) and worked those fixes into their public nightly builds. The Sun team also 'promoted' a nightly build of Metro 1.2 to a semi-permanent download location (the first 1.2 build to get that treatment), because it is the build tested to successfully interop with our SDK release, even though that build is known to have some regressions in some of their other test scenarios. As they work towards wrapping up their 1.2 release and fixing those other bugs, we'll continue to test and talk to help ensure that the interop scenarios keep working.

As a result of this collaboration, Metro 1.2 is going to be a better and more interoperable release for Sun's customers and the greater Java community, and BizTalk Services as well as our future identity products will be better and more interoperable, too. Win-Win. Thank you, Sun.

As a goodie, I put some code into the Java sample that might be useful even if you don't care about our services. Since configuring the Java certificate stores for standalone applications can be really painful, I added some simple code that uses a week-old feature of the latest Metro 1.2 bits to configure the Truststores/Keystores dynamically and pull the stores from the client's .jar at runtime. The code also has an authorization utility class that shows how to get and evaluate claims on the service side by pulling the SAML token out of the context and extracting the correct attributes from the token.
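To give a flavor of the two utilities without quoting the sample verbatim (the Metro-specific configuration hooks are brand-new and best taken from the SDK itself), here is the generic core of both ideas in plain JDK terms: loading a keystore from a stream, such as a resource read out of the client's .jar, and collecting the values of a named attribute from a parsed SAML 1.1 assertion. Class and resource names here are made up for illustration.

```java
import java.io.InputStream;
import java.security.KeyStore;
import java.util.ArrayList;
import java.util.List;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class IdentityHelpers {
    static final String SAML11 = "urn:oasis:names:tc:SAML:1.0:assertion";

    // Loads a JKS keystore from any stream, e.g. one obtained via
    // getClass().getResourceAsStream("/certs/client.jks"), so the
    // stores can travel inside the application's .jar instead of
    // sitting at a fixed path on disk.
    public static KeyStore loadKeyStore(InputStream in, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(in, password);
        return ks;
    }

    // Collects all values of the named attribute from a parsed
    // SAML 1.1 assertion element, for claims-based authorization checks.
    public static List<String> attributeValues(Element assertion, String attributeName) {
        List<String> values = new ArrayList<String>();
        NodeList attrs = assertion.getElementsByTagNameNS(SAML11, "Attribute");
        for (int i = 0; i < attrs.getLength(); i++) {
            Element attr = (Element) attrs.item(i);
            if (!attributeName.equals(attr.getAttribute("AttributeName"))) continue;
            NodeList vals = attr.getElementsByTagNameNS(SAML11, "AttributeValue");
            for (int j = 0; j < vals.getLength(); j++) {
                values.add(vals.item(j).getTextContent());
            }
        }
        return values;
    }
}
```

The stream-based keystore load is the whole trick behind jar-embedded stores; everything else is wiring it into the runtime's TrustManager/KeyManager configuration.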

Have fun.

[By the way, this is not an April Fool's joke, in case you were wondering]

Categories: Architecture | IT Strategy | Technology | CardSpace | ISB | WCF

Steve has a great analysis of what BizTalk Services means for Corzen and how he views it in the broader industry context.

Categories: Architecture | SOA | IT Strategy | Technology | BizTalk | WCF | Web Services

April 25, 2007
@ 03:28 AM

"ESB" (for "Enterprise Service Bus") is an acronym that has been floating around the SOA/BPM space for quite a while now. The notion is that you have a set of shared services in an enterprise that act as a shared foundation for discovering, connecting and federating services. That's a good thing and there's not much of a debate about the usefulness, except over whether "ESB" is merely the term used to describe this service fabric or the name of a concrete product. Microsoft has, for instance, directory services, the UDDI registry, and our P2P resolution services that contribute to the discovery portion; we've got BizTalk Server as a scalable business process, integration and federation hub; we've got the Windows Communication Foundation for building service-oriented applications and endpoints; we've got the Windows Workflow Foundation for building workflow-driven endpoint applications; and we have the Identity Platform with ILM/MIIS, ADFS, and CardSpace that provides the federated identity backplane.

Today, the division I work in (Connected Systems Division) has announced BizTalk Services, which John Shewchuk explains here and Dennis Pilarinos drills into here.

Two aspects that make the idea of a "service bus" generally very attractive are that the service bus enables identity federation and connectivity federation. This idea gets far more interesting and more broadly applicable when we remove the "Enterprise" constraint from ESB and put "Internet" into its place, thus elevating it to an "Internet Services Bus", or ISB. If we look at the most popular Internet-dependent applications outside of the browser these days, like the many Instant Messaging apps, BitTorrent, Limewire, VoIP, Orb/Slingbox, Skype, Halo, Project Gotham Racing, and others, many of them depend on one or two key services: Identity Federation (or, in absence of that, a central identity service) and some sort of message relay to connect up two or more application instances that each sit behind firewalls - and at the very least some stable, shared rendezvous point or directory to seed P2P connections. The question "how does Messenger work?" has, from a high-level architecture perspective, a simple answer: The Messenger "switchboard" acts as a message relay.
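To make the relay idea concrete: a relay is just a publicly reachable process that accepts connections which both parties open outbound (and which therefore pass their firewalls and NATs) and shuttles bytes between them. A toy single-pair sketch, which obviously does none of the multiplexing, authentication or naming that a real switchboard does:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ToyRelay {
    // Accepts exactly two clients and pumps bytes between them in both
    // directions. Real relays multiplex many sessions, authenticate both
    // ends, and let parties find each other by name.
    public static void relayOnePair(ServerSocket server) throws Exception {
        Socket a = server.accept();
        Socket b = server.accept();
        Thread t1 = pump(a.getInputStream(), b.getOutputStream());
        Thread t2 = pump(b.getInputStream(), a.getOutputStream());
        t1.join();
        t2.join();
        a.close();
        b.close();
    }

    // Copies bytes from one stream to the other until end-of-stream.
    static Thread pump(final InputStream in, final OutputStream out) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (Exception ignored) {
                    // connection torn down; nothing sensible to do in a toy
                }
            }
        });
        t.start();
        return t;
    }
}
```

Both application instances dial out to the relay; neither needs an inbound port open, which is exactly the property that makes the pattern work across consumer firewalls.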

The problem gets really juicy when we look at the reality of what connecting such applications means, and at what happens if an ISV (or you!) were to come up with the next cool thing on the Internet:

You'll soon find out that you will have to run a whole lot of server infrastructure and the routing of all of that traffic goes through your pipes. If your cool thing involves moving lots of large files around (let's say you'd want to build a photo sharing app like the very unfortunately deceased Microsoft Max) you'd suddenly find yourself running some significant sets of pipes (tubes?) into your basement even though your users are just passing data from one place to the next. That's a killer for lots of good ideas as this represents a significant entry barrier. Interesting stuff can get popular very, very fast these days and sometimes faster than you can say "Venture Capital".

Messenger runs such infrastructure. And the need for such infrastructure was indeed a (not entirely unexpected) important takeaway from the cited Max project. What looked to be just a very polished and cool client app to showcase all the Vista and NETFX 3.0 goodness was just the tip of a significant iceberg of (just as cool) server functionality that was running in a Microsoft data center to make the sharing experience as seamless and easy as it was. Once you want to do cool stuff that goes beyond the request/response browser thing, you easily end up running a data center. And people will quickly think that your application sucks if that data center doesn't "just work". And that translates into several "nines" in terms of availability in my book. And that'll cost you.

As cool as Flickr and YouTube are, I don't think any of them or their brethren are nearly as disruptive in terms of architectural paradigm shift and long-term technology impact as Napster, ICQ and Skype were when they appeared on the scene. YouTube is just a place with interesting content. ICQ changed the world of collaboration. Napster's and Skype's impact changed and is changing entire industries. The Internet is far more and has more potential than just having some shared, mashed-up places where lots of people go to consume, search and upload stuff. "Personal computing" where I'm in control of MY stuff and share between MY places from wherever I happen to be and NOT giving that data to someone else so that they can decorate my stuff with ads has a future. The pendulum will swing back. I want to be able to take a family picture with my digital camera and snap that into a digital picture frame at my dad's house at the push of a button without some "place" being in the middle of that. The picture frame just has to be able to stick its head out to a place where my camera can talk to it so that it can accept that picture and know that it's me who is sending it.

Another very concrete and real personal case in point: I am running, and I've written about that before, a custom-built (software/hardware) combo of two machines (one in Germany, one here in the US) that provides me and my family with full Windows Media Center embedded access to live and recorded TV along with electronic program guide data for 45+ German TV channels, Sports Pay-TV included. The work of getting the connectivity right (dynamic DNS, port mappings, firewall holes), dealing with the bandwidth constraints and shielding this against unwanted access was ridiculously complicated. This solution, together with IP telephony and video conferencing (over Messenger, Skype), shrinks the distance to home to what's effectively just the inconvenience of the 9-hour time difference and the fact that we don't see family and friends in person all that often. Otherwise we're completely "plugged in" on what's going on at home and in Germany in general. That's an immediate and huge improvement of the quality of living for us, it is enabled by the Internet, and it has very little to do with "the Web", let alone "Web 2.0" - except that my Program Guide app for Media Center happens to be an AJAX app today. Using BizTalk Services would throw out a whole lot of complexity that I had to deal with myself, especially on the access control/identity, connectivity and discoverability fronts. Of course, since I've done it the hard way and it's working to a degree that my wife is very happy with it as it stands (which is the customer satisfaction metric that matters here), I'm not making changes for technology's sake until I attack the next revision of this, or I'll wait for one of the alternative and improving solutions (Orb is on a good path) to catch up with what I have.

But I digress. Just as much as the services that were just announced (and the ones that are lined up to follow) are a potential enabler for new Napster/ICQ/Skype type consumer space applications from innovative companies who don't have the capacity or expertise to run their own data center, they are also and just as importantly the "Small and Medium Enterprise Service Bus".

If you are an ISV selling shrink-wrapped business solutions to SMEs whose network infrastructure may be as simple as a DSL line (with a dynamic IP) that goes into a (wireless) hub and is locked down as much as it possibly can be by the local networking company that services them, then it doesn't matter how much we do as an industry to make inter-company B2B work and expand it to SMEs: your customers just aren't playing in that game if they can't get over these basic connectivity hurdles.

Your app, which lives behind the firewall shield, NAT, and a dynamic IP, doesn't have a stable, public place where it can publish its endpoints, and you have no way to federate identity (and access control) unless you do some pretty invasive surgery on their network setup or end up building and running a bunch of infrastructure on-site or for them. And that's the same problem that the consumer apps mentioned above have. Even more so: if you look at the list of "coming soon" services, you'll find that problems like relaying events or coordinating work with workflows are very suitable for many common use-cases in SME business applications, once you imagine expanding their scope to inter-company collaboration.

So where's "Megacorp Enterprises" in that play? First of all, Megacorp isn't an island. Every Megacorp depends on lots of SME suppliers and retailers (or their equivalents in the respective lingo of the verticals). Plugging all of them directly into Megacorp's "ESB" often isn't feasible for lots of reasons, and even less so if the SME has a second or third (imagine that!) customer and/or supplier.

Second, Megacorp isn't a uniform big entity. The count of "enterprise applications" running inside of Megacorp is measured in thousands rather than dozens. We're often inclined to think of SAP or Siebel when we think of enterprise applications, but the vast majority are much simpler and more scoped than that. It's not entirely ridiculous to think that some of those applications run (gasp!) under someone's desk or in a cabinet in an extra room of a department. And it's also not entirely ridiculous to think that these applications are so vertical and special that their integration into the "ESB" gets continuously overridden by someone else's higher priorities, and yet the respective business department needs a very practical way to connect with partners now and be "connectable" even though it sits deeply inside the network thicket of Megacorp. While it is likely on every CIO's goal sheet to contain that sort of IT anarchy, it's a reality that needs answers in order to keep the business bringing in the money.

Third, Megacorp needs to work with Gigacorp. To make it interesting, let's assume that Megacorp and Gigacorp don't like each other much and trust each other even less. They even compete. Yet, they've got to work on a standard and hence they need to collaborate. It turns out that this scenario is almost entirely the same as the "Panic! Our departments take IT in their own hands!" scenario described above. At most, Megacorp wants to give Gigacorp a rendezvous and identity federation point on neutral ground. So instead of letting Gigacorp on their ESB, they both hook their apps and their identity infrastructures into the ISB and let the ISB be the mediator in that play.

Bottom line: There are very many solution scenarios, of which I mentioned just a few, where "I" is a much more suitable scope than "E". Sometimes the appropriate scope is just "I", sometimes the appropriate scope is just "E". The key to achieving the agility that SOA strategies commonly promise is the ability to do the "E to I" scale-up whenever you need it in order to enable broader communication. If you need to elevate one or a set of services from your ESB to Internet scope, you have the option to go and do so as appropriate, integrated with your identity infrastructure. And since this is all strictly WS-* standards-based, your "E" might actually be "whatever you happen to run today". BizTalk Services is the "I".

Or, in other words, this is a pretty big deal.

Categories: Architecture | SOA | IT Strategy | Microsoft | MSDN | BizTalk | WCF | Web Services

December 20, 2006
@ 11:07 PM

It's been slashdotted and also otherwise widely discussed that Google has deprecated their SOAP API. A deadly blow for SOAP, as people are speculating? I guess not.

What I find striking are the differences in the licenses between the AJAX API and the SOAP API. That's where the beef is. While the results obtained through the SOAP API can be used (for non-commercial purposes) practically in any way except that "you may not use the search results provided by the Google SOAP Search API service with an existing product or service that competes with products or services offered by Google.", the AJAX API is constrained to use with web sites with the terms of use stating that "The API is limited to allowing You to host and display Google Search Results on your site, and does not provide You with the ability to access other underlying Google Services or data."

The AJAX API is a Web service that works for Google because its terms of use are very prescriptive about how to build a service that ensures Google's advertising machine gets exposure and clicks. That's certainly a reasonable business decision, but it has nothing to do with SOAP vs. REST or anything else technical. There's just no money in application-to-application messaging for Google (unless they were to actually set up an infrastructure to charge for software as a service and provide support and a proper SLA for it, one that says more than "we don't make any guarantees whatsoever"), while there's a lot of money for them in getting lots and lots of people to give them a free spot on their own sites onto which they can place their advertising. That's what their business is about, not software.

Categories: IT Strategy | Technology | Web Services

I was sad when "Indigo" and "Avalon" went away. It'd be great if we had a pool of cool, legal-approved code-names for which we own the trademark rights and which we could stick to. Think Delphi or Safari. "Indigo" was cool insofar as it was very handy for referring to the technology set, but removed far enough from the specifics that it neither creates a sharply defined, product-like island within the larger managed-code landscape nor carries legacy connotations like "ADO.NET". Also, my talks these days could be 10 minutes shorter if I could refer to Indigo instead of "Windows Communication Foundation". Likewise, my job title wouldn't have to have a line wrap on the business card if I ever spelled it out in full.

However, when I learned about the WinFX name going away (several weeks before the public announcement) and the new "Vista Wave" technologies (WPF/WF/WCF/WCS) being rolled up under the .NET Framework brand, I was quite happy. Ever since it became clear in 2004 that the grand plan to put a complete, covers-all-and-everything managed API on top (and on quite a bit of the bottom) of everything Windows would have to wait until significantly after Vista and that therefore the Win16>Win32>WinFX continuity would not tell the true story, that name made only limited sense to stick to. The .NET Framework is the #1 choice for business applications and a well established brand. People refer to themselves as being "dotnet" developers. But even though the .NET Framework covers a lot of ground and "Indigo", "Avalon", "InfoCard", and "Workflow" are overwhelmingly (or exclusively) managed-code based, there are still quite a few things in Windows Vista that still require using P/Invoke or COM/Interop from managed code or unmanaged code outright. That's not a problem. Something has to manage the managed code and there's no urgent need to rewrite entire subsystems to managed code if you only want to add or revise features.

So now all the new stuff is part of the .NET Framework. That is a good, good, good change. This says what it all is.

Admittedly confusing is the "3.0" bit. What we'll ship is a Framework 3.0 that rides on top of the 2.0 CLR and includes the 2.0 versions of the Base-Class Library, Windows Forms, and ASP.NET. It doesn't include the formerly-announced-as-to-be-part-of-3.0 technologies like VB9 (there you have the version number consistency flying out the window outright), C# 3.0, and LINQ. Personally, I think that it might be a tiny bit less confusing if the Framework had a version-number neutral name such as ".NET Framework 2006" which would allow doing what we do now with less potential for confusion, but only a tiny bit. Certainly not enough to stage a war over "2006" vs. "3.0".

It's a matter of project management reality, and also one of platform predictability, that the ASP.NET or Windows Forms teams do not and should not ship a full major-version revision of their bits every year. They shipped Whidbey (2.0) in late 2005 and hence it's healthy for them to have boarded the scheduled-to-arrive-in-2007 boat heading to Orcas. We (the "WinFX" teams) subscribed to the Vista ship docking later this year and we bring great innovation which will be preinstalled on every copy of it. LINQ, as well as VB9 and C# incorporating it on a language level, are very obviously Visual Studio bound and hence they are on the Orcas ferry as well. The .NET Framework is a steadily growing development platform that spans technologies from the Developer Division, Connected Systems, Windows Server, Windows Client, SQL Server, and other groups, and my gut feeling is that extending it off-cycle from the Developer Division's Visual Studio and CLR releases will become the norm. Whenever a big ship docks in the port, be it Office, SQL, BizTalk, Windows Server, or Windows Client, and as more and more of the still-unmanaged Win32/Win64 surface area gets wrapped, augmented or replaced by managed-code APIs over time and entirely new things are added, there might be bits that fit into and update the Framework.

So one sane way to think about the .NET Framework version number is that it merely labels the overall package and not the individual assemblies and components included within it. Up to 2.0 everything was pretty synchronized, but given the ever-increasing scale of the thing, it's good to think of that as a lucky (even if intended) coincidence of scheduling. This surely isn't the first time that packages were versioned independently of their components. There was and is no reason for the ASP.NET team to gratuitously recompile their existing bits with a new version number just to have the GAC look pretty and to create the illusion that everything is new - and to break Visual Studio compatibility in the process.

Of course, once we cover 100% of the Win32 surface area, we can rename it all into WinFX again ;-)  (just kidding)

[All the usual "personal opinion" disclaimers apply to this post]

Update: Removed reference to "Win64".

Categories: IT Strategy | Technology | ASP.NET | Avalon | CLR | Indigo | Longhorn | WCF | Windows

My blogging efforts this year aren’t really impressive, are they? Well, the first half of the year I was constantly on the road at a ton of conferences and events and didn’t really get the time to blog much. After TechEd Europe, I was simply burned out, took three weeks of vacation to recover somewhat and since then I’ve been trying to get some better traction with areas of Visual Studio that I hadn’t really looked at well enough since Beta 2 came out. And of course there’s WinFX with Avalon and Indigo that I need to track closely.

Now, of course I have the luxury of being able to dedicate a lot of time to learning, because taking all the raw information in, distilling it down and making it more accessible to folks who can’t spend all that time happens to be my job. However, I am finding myself in this rather unfortunate situation again that I will have to throw some things overboard and will have to focus on a set of areas.

At some point rather early in my career I decided that I just can’t track all hardware innovations anymore. I certainly still know what’s generally going on, but when I buy a new machine I am just a client for Dell, Siemens or Alienware like the next guy. I don’t think I could make a qualified enough choice of good hardware components to build my own PC nowadays, and I actually have little interest in doing so. All that stuff comes wrapped in a case, and if I don’t really have to open it to put an extension card in, I have no interest in looking inside. The same goes for x86 assembly language. The platform is still essentially the same even after more than a decade, and whenever Visual Studio is catching an exception right in the middle of “the land of no source code”, I can – given I have at least the debug symbols loaded – actually figure out where I am and, especially if it’s unmanaged code, often figure out what’s failing. But if someone were to ask me about the instruction set innovations of the Pentium 4 processor generation, I’ll happily point to someone else. In 2001, I wrote a book on BizTalk and probably knew the internals and how everything fits together as well as anyone outside of Microsoft possibly could. BizTalk 2006 is so far away from me now that I’d have a hard time giving you the feature delta between the 2004 and 2006 versions. Over time, many things went overboard that way; hardware and assembly being the first, and every year it seems like something else needs to go.

The reason is very simple: capacity. There’s a limit to how much information an individual can process, and I think that by now Microsoft has managed to push the feature depth to the point where I can’t fit Visual Studio and related technologies into my head all at once any longer. In my job, it’s reasonable for people to expect that whenever I get up on stage or write an article, I give them the 10%-20% of pre-distilled “essentials” that they need 80% of the time out of a technology, and that I know close to 100% of the stuff that’s underneath, so that they can ask me questions and I can give them good, well-founded answers. In the VS2003/NETFX 1.1 wave, I’ve done something (even if it was just a demo) with every single public namespace and I am confident that I can answer a broad range of customer questions without having to guess.

Enter VS2005 and the summary of trying to achieve the same knowledge density is: “Frustrating”.

There is so much more to (have to) know, especially given that there’s now Team System and the Team Foundation Server (TFS). TFS is “designed to be customized” and the product makes that clear wherever you look. It is a bit like an ERP system in that way. You don’t really have a solution unless you set up a project to customize the product for your needs. Hence, “Foundation” is a more than fitting name choice.

I’ve been in a planning project for the past two weeks where the customer has a well thought out idea about their analysis, design and development processes and while TFS seems like a great platform for them, they will definitely need customized events, a custom-tailored version of MSF Agile with lots of new fields and custom code analysis and check-in policies, integration with and bi-directional data flow from/into “satellite products” such as a proper requirements analysis system, a help-desk solution and a documentation solution, and probably will even want to get into building their own domain specific language (DSL).  All of that is possible and the extensibility of Visual Studio and TFS is as broad as the customer would need it to be, but … who would know? The Team System Extensibility Kit has documentation to extend and customize process templates, work items, source control, the build system, the team explorer, test tools, and reporting and that’s just the headlines. Add the tools for custom domain specific languages (huge!) and the class designer extensibility and you’ve got more than enough stuff to put your head into complete overflow mode.

And at that point you haven’t even looked at the news in Windows Forms (where I like all the new data binding capabilities a lot), much less at ASP.NET 2.0, which is an entire planet all by itself. Oh, and of course there is the new deployment model (aka “ClickOnce”), SQL 2005 with all those new features (of which SQL/CLR is the least interesting to me), and BizTalk 2006 and, and, and …

And of course, my core interest is really with the Windows Vista WinFX technology wave including of course Indigo (don’t make me use “WCF”) and for me to a lesser degree Avalon (yes, yes: “WPF”) for which knowing the VS2005/NETFX 2.0 foundation is of course a prerequisite.

What kills me with Avalon, for instance, is that I’ve got quite a bit of the 2D stuff cornered and know how to do things even with just an XML editor in hands, but that the 3D stuff is nicely integrated and sits right there in front of me and I just don’t have the necessary knowledge depth about building 3D apps to do the simplest thing and not the time to acquire that knowledge. And I’ve got such great ideas for using that stuff.

It looks like it’s time to take some things off the table again and that’s an intensely frustrating decision to make.

Don’t get me wrong … I am not complaining about Microsoft adding too many features to the platform. Au contraire! I think that we’re seeing a wave of innovation that’s absolutely fantastic and will enable us out here to build better, more featured applications.

But for a development team to benefit from all these technologies, specialization is absolutely needed. The times when development teams had roughly the same technology knowledge breadth and everyone could do everything are absolutely coming to an end. And the number of generalists who have a broad, qualified overview on an entire platform is rapidly shrinking.

And the development teams will change shape. Come Avalon, and game developers (yes, game developers) will be in great demand in places that are as far away from gaming as you could imagine. I’ve just had meetings with a very conservative and large investment management company and they are thinking hard about adding multi-layer, alpha-blended, 3D data visualizations complete with animations and all that jazz to their trading system UIs, and they’ve got the business demand for it. Of course, the visualization experts won’t be data mining and data analysis or software integration specialists; that’s left for others to do.

For “generalists” like me, these are hard and frustrating times if they’re trying to stay generalists. Deep, consistent specialization is a great opportunity for everyone, and the gamble is of course to pick the right technology to dig into and become “the expert” in. If that technology or problem space becomes the hottest thing everyone must have – you win your bet. Otherwise you might be in trouble.

Here are some ideas and “predictions” for such sought-after specialists – but, hey, that’s just my unqualified opinion:

· Cross-platform Web service wire doctors. As much as all the vendors will try to make their service platforms such as Indigo, WebSphere, WebLogic and Apache interoperable, customers will try hard to break it all by introducing “absolutely necessary” whacky protocol extensions and by using every extensibility point imaginable. As if that weren’t hard enough already today, where most interop happens with more or less bare-bones SOAP envelopes, just wait until bad ideas get combined with the full WS-* stack, including reliable messaging, security and routing. These folks will of course have to know everything about security aspects like federation, single sign-on, etc.

· Visualization Developers. Avalon is a bit like an advanced 3D gaming engine for business application developers. While that seems like a whacky thing to say – just wait and see what happens. Someone will come along and build a full-blown ERP or CRM package whose data visualization capabilities and advanced data capture methods will blow everything existing out of the water, and everything with white entry fields on a gray background, boring fonts and some plain bar charts will suddenly look not much better than a green-screen app. In 3 years you will have people modeling 3D wire-frames on your development teams or you are history – and the type of app doesn’t really matter much.

· Development Tool and Process Customization Specialists: I expect Team System to become the SAP R/3 of software development: no deployment without customization, even if that only happens over time. Brace for the first product update that comes around and changes and extends the foundation’s data structures. I fully expect Team System and the Team Foundation Server to gain substantial market share, and I fully expect that there’ll be a lot of people dying to get customization assistance.


That said: I am off to learn more stuff.

Categories: IT Strategy | Technology

February 15, 2005
@ 07:12 PM

CNet’s report on Bill Gates’ announcement that Windows Anti-Spyware is going to be free includes the following truly puzzling quote from the Check Point Software CTO:

"I am glad to see Gates is focusing on securing the desktop," said Gregor Freund, chief technology officer of Check Point Software, which develops desktop security software. "However, there are some serious downsides to Microsoft's approach. Just by entering the security market, Microsoft could stall innovation by freezing any kind of spending of venture capital on Windows security which in the long run, will lead to less security, not more."

Is it just me, or do you also find the term “venture capital” a little out of place in this context?

Categories: IT Strategy | Other Stuff

I feel like I have been "out of business" for a really long time and like I really got nothing done in the past 3 months, even though that's objectively not true. I guess that's "conference & travel withdrawal", because I had tons and tons of bigger events in the first half of the year and 3 smaller events since TechEd Amsterdam in July. On the upside, I am pretty relaxed and have certainly reduced my stress-related health risks ;-)

So with winter and its short days coming up, and the other half of my life living a third of the way around the planet until next spring, I can and will spend some serious time on a bunch of things:

On the new programming stuff front:
     Catch up on what has been going on in Indigo in recent months, dig deeper into "everything Whidbey", figure out the CLR aspects of SQL 2005 and familiarize myself with VS Team System.

On the existing programming stuff front:
      Consolidate my "e:\development\*" directory on my harddrive, pull together all my samples and utilities for Enterprise Services, ASP.NET Web Services and other enterprise-development technologies, and create a production-quality library from them for us and our customers to use. Also, because the Indigo team has been doing quite a bit of COM/COM+ replumbing recently in order to have that programming model ride on Indigo, I have some hope that I can now file bugs/wishes against COM+ that might have a chance of being addressed. If that happens and a particular showstopper gets out of the way, I will reopen this project here and will, at the very least, release it as a toy.

On the architectural stuff front:
         Refine our SOA Workshop material, do quite a bit of additional work on the FABRIQ, evolve the Proseware architecture model, and get some pending projects done. In addition to our own SOA workshops (the next English-language workshop is held December 1-3, 2004 in Düsseldorf), there will be a series of invite-only Microsoft events on Service Orientation throughout Europe this fall/winter, and I am very happy that I will be speaking -- mostly on architecture topics -- at the Microsoft Eastern Mediterranean Developer Conference in Amman/Jordan in November and several other locations in the Middle East early next year. 

And even though I hate the effort around writing books, I am seriously considering writing a book about "Services" in the next months. There's a lot of stuff here on the blog that should really be consolidated into a coherent story, and there are lots and lots of considerations and motivations for decisions I made for FABRIQ and Proseware and other services-related work that I should probably write down in one place. One goal of the book would be a pragmatic guide on how to design and build services using currently shipping (!) technologies, one that focuses on how to get stuff done and not on how to craft new, exotic SOAP headers, do WSDL trickery, or do other "cool" but not necessarily practical things. So don't expect a 1200-page monster.

In addition to the "how to" part, I would also like to incorporate and consolidate other architects' good (and bad) practical design and implementation experiences, and write about adoption accelerators and barriers, and some other aspects that are important to get the service idea past the CFO. That's a great pain point for many people thinking about services today. If you would be interested in contributing experiences (named or unnamed), I certainly would like to know about it.

And I also think about a German-to-English translation and a significant (English) update to my German-language Enterprise Services book.....

[And to preempt the question: No, I don't have a publisher for either project, yet.]

Categories: Architecture | SOA | Blog | IT Strategy | newtelligence | Other Stuff | Talks

Microsoft Watch highlights the recently surfaced HP memo that speculates that Microsoft would start enforcing its patent portfolio on Open Source. How likely is it? It is an interesting question, indeed. Here’s what I think:

The patent situation, especially on the middleware market, used to be very much like the cold war between the USSR and the USA in the last century. One side moves, everyone dies. My guess is that if Microsoft had gone out and dragged Sun to court over J2EE and Sun had countersued over .NET, things would have gotten really, really nasty. The very foundations of the J2EE stack are sitting right in the middle of a substantial Microsoft patent minefield covering what we know as MTS and COM+. The reverse doesn’t look much better. Now Sun and Microsoft made peace on that front and are even looking to negotiate a broad IP cross-licensing deal to end that sort of arms race. Cross-licensing of patents is quite common in this industry and in most other industries as well. So where does that leave the grassroots Open Source movement? Not in a good place, for sure.

If you do research and you pour millions or even billions into that research, there has to be some return on that investment. And there is a difference between academic research and research that yields commercial products. I am not saying that there is no close relationship between the two, but they are done with different goals. If you do research for commercial purposes, regardless of whether you do it in the automotive industry, the pharmaceutical industry or the software industry, the results of your research deserve protection. At the same time, it would be harmful to society at large if everyone kept all results of all research under wraps. So governments offer companies a deal: you disclose the results of your research and we grant you a limited-time monopoly to use that technology exclusively. If you decide to share the technology with other parties, you can be forced to allow third parties to license it on appropriate terms. And §11(1) of the German patent law, for instance, explicitly states that patents do not cover noncommercial use of technology in a private environment.

Now, if states offer that sort of system, a company that is built almost entirely on intellectual property (like Sun, IBM, Oracle, Apple, Microsoft and so forth) must play with the system. They must file for patents. If they don’t, they end up with something like the Eolas mess on their hands, and that is not pretty. Even if some of the patents seem absolutely ridiculous: if the patent lawyers at a large company figure out that a certain technology is not covered by an existing patent, they must go and protect it. Not necessarily to enforce it, but rather to avoid that someone else enforces it on them. And because a lot of these patents are indeed idiotic, they are rarely ever enforced and most often quite liberally licensed. Something similar is true for trademarks. Microsoft has no choice but to chase Lindows (now Linspire) or even poor “Mike Rowe Soft”, because they must defend their trademarks, by law. If they don’t and let a case slip, they might lose them. It’s not about being nasty, it’s just following the rules that lawmakers have set.

Now, if someone starts cheating on the research front and consumes IP from that system but never contributes IP to the system, it does indeed change the ballgame. If you don’t have a patent portfolio that is interesting enough for someone else to enter a (possibly scoped) cross-licensing deal with you, and you don’t license such patents for money but instead break the other parties’ rightfully (it’s the law!) acquired, time-limited monopolies on commercial use of the respective technologies, and you do so for profit, then you are simply violating the law. That’s as simple as it is. So, if I held Sun’s or Microsoft’s patent portfolio, would I ask those who profit from commercialization of those patents for my share? I really might give it some serious consideration. I think companies like Red Hat make wonderful targets, because they are commercial entities that profit greatly from a lot of IP that they do not (as I suspect) have properly licensed for commercial exploitation. The interesting thing is that my reading of the (German) patent law is that the non-profit Apache Foundation can actually use patented technology without being at risk, but a for-profit company cannot adopt their results without being liable to acquire a license. Even giving away “free” software in order to benefit from the support services is commercialization. So if Red Hat includes some Apache project’s code that steps on patents, I’d say they are in trouble.

Now, if someone were to “reimplement” a patented drug, the pharmaceutical company sitting on the patent would sue them out of existence the next second without even blinking. Unless I am really badly informed, the entire biotech industry is entirely built on IP protection. All these small biotech firms are doing research that eventually yields protected IP and that’s what they look to turn into profit. They’re not in the business of producing and distributing the resulting drugs on a world-wide scale, they look to share the wealth with the pharmaceutical giants that have the respective infrastructure. The software industry is a very, very tame place against what’s going on in other industries. So will Sun, IBM, Oracle, Apple, and/or Microsoft eventually become more serious about drawing profit from the rights they hold? Right now it would be a very, very stupid thing to do in terms of the resulting, adverse marketing effect.

Now imagine Sun’s unfortunate decline keeps going or some other technology company with a substantial patent portfolio (and not some weak copyright claims) falls into the hands of a litigious bunch of folks as in the case of SCO.  That’s when the shit is going to hit the fan. Big time.

Categories: IT Strategy

February 26, 2004
@ 06:40 PM

If Sun were actually to open-source (that's a verb now, is it?) Java as IBM demands, IBM would finally own it. They've got more resources and they'd throw them at the problem, easily taking away the leadership in the Java space. Sun would just be sitting there, watching in disbelief what happens to what used to be their stuff. That's really what IBM wants and I am amazed how clever they are about it.

So, let's assume Sun would fall for it. Then there would be a core Java environment that's open-source and the J2EE (the stuff that really matters) implementations would still be closed? How much would that be worth? So the next logical call is to say "let's open up all those J2EE app servers and related infrastructures, too." And there we have IBM making WebSphere a freebie they throw into projects (isn't that done, anyways?) and killing off most of the software revenue models of their competition while happily buzzing along with their huge global services operation and their server hardware business.

Of course that's just an evil conspiracy theory.

Categories: IT Strategy

Blah, Blah ... "...if someone is using it who doesn't have a clue. Just like us."  Dismissing vulnerabilities in any operating system like that just to turn around and bash Microsoft while they're at it is just idiotic.

Categories: IT Strategy

I see quite a few models for Service Oriented Architectures that employ pipelines with validating "gatekeeper" stages that verify whether inbound messages are valid according to an agreed contract. Validation on inbound messages is a reactive action resulting from distrust of the communication partner's ability to adhere to the contract. Validation on inbound messages shields a service from invalid input data, but seen from the perspective of the entire system, the action occurs too late.

What I see less often is a gatekeeper on outbound channels that verifies whether the currently executing local service adheres to the agreed communication contract. Validation on outbound messages is a proactive action taken in order to create trust with partners about the local service's ability to adhere to a contract. Furthermore, validation on outbound messages is quite often the last chance action before a well-known point of no return: the transaction boundary. If a service is faulty, for whatever reason, it needs to consistently fail and abort transactions instead of emitting incorrect messages that are in violation of the contract. If the service is faulty, it must consequently be assumed that compensating recovery strategies will not function properly and with the desired result.

Exception information that is generated on an inbound channel, especially in asynchronous one-way scenarios, vanishes into a log file at a location/organization that may not even own the sending service that's in violation of the contract. The only logical place to detect contract violations in order to isolate and efficiently eliminate problems is on the outbound, not on the inbound channel. Eliminating problems may mean fixing bugs in the software, allowing manual correction by an operator/clerk, or an automatic rejection/rollback/retry of the operation yielding the incorrect result. None of these corrective actions can be done in a meaningful way by the message recipient. The recipient can shield itself, and that is and remains very important. However, it's just a desperate act of digging in after the last line of defense has already fallen.
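To make the outbound-gatekeeper idea concrete, here is a minimal sketch (Python, purely illustrative; the contract shape, field names and `send` helper are all made up, not from any real pipeline framework). The point is only that the check runs on the sender's side, before the transaction boundary, and fails the operation instead of emitting a contract-violating message:

```python
class ContractViolation(Exception):
    """Raised when an outbound message breaks the agreed contract."""
    pass

# A toy "contract": required fields and their expected types.
# A real system would use an XSD or similar machine-readable contract.
ORDER_CONTRACT = {
    "order_id": str,
    "quantity": int,
    "unit_price": float,
}

def validate_outbound(message: dict, contract: dict) -> dict:
    """Last-chance gatekeeper check before the message leaves the service."""
    for field_name, expected_type in contract.items():
        if field_name not in message:
            raise ContractViolation(f"missing field: {field_name}")
        if not isinstance(message[field_name], expected_type):
            raise ContractViolation(
                f"field {field_name!r} has {type(message[field_name]).__name__}, "
                f"expected {expected_type.__name__}")
    return message

def send(message: dict, transport: list) -> None:
    # A validation failure aborts here, inside the sender's scope,
    # where the fault can still be fixed -- not in the recipient's log file.
    transport.append(validate_outbound(message, ORDER_CONTRACT))

outbox = []
send({"order_id": "A-17", "quantity": 3, "unit_price": 9.5}, outbox)
try:
    send({"order_id": "A-18", "quantity": "three", "unit_price": 9.5}, outbox)
except ContractViolation as e:
    failed = str(e)   # the faulty message never reached the outbox
```

The faulty second message is stopped at the sender, so the recipient's inbound gatekeeper becomes exactly what it should be: a shield, not the primary diagnostic tool.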

Categories: Architecture | IT Strategy

October 6, 2003
@ 11:32 AM

If you are given a hard time, because a project is late, there's now a good example to point to and say "well, it could be a lot worse".

The German TollCollect consortium that is contracted by the German government to build and operate a GSM/GPS based system to collect a country-wide, per-kilometer road use toll from trucks is currently caught right in the middle of one of the most publicized, most political and most costly IT project delays in Germany, ever. The problems seem to be a nightmarish combination of issues arising in a huge project slammed together in a hurry, and the blame must be shared by both sides, the government and the industry partners. However, the result is absolutely disastrous for the already overstressed state budgets: German taxpayers are currently losing at least US$150 million (€130 million) a month just because of the project delays.

Categories: IT Strategy

Javier Gonzalez sent me a mail today on my most recent SOA post and says that it resonates with his experience:

I just read your article about services and find it very interesting. I have been using OOP languages to build somewhat complex systems for the last 5 years and even if I have had some degree of success with them, I usually find myself facing those same problems u mention (why, for instance, do I have to throw an exception to a module that doesn't know how to deal with it?). Yes, objects in a well designed OOP systems are *supposed* to be loosely coupled, but then, is that really possible to completely achieve? So I do agree with u SOA might be a solution to some of my nightmares. Only one thing bothers me, and that is service implementation. Services, and most of all Web Services only care about interfaces, or better yet, contracts, but the functionality that those contracts provide have to be implemented in some way, right? Being as I am an "object fan" I would use an OO language, but I would like to hear your opinions on the subject. Also, there's something I call "service feasibility". Web Services and SOA in general do "sound" a very nice idea, but then, on real systems they tend to be sluggish, to say the least. They can put a network on its knees if the amount of information transmitted is only fair. SAOP is a very nice idea when it comes to interoperability, but the messages are *bloated* and the system's performance tend to suffer. -- I'd love to hear your opinions on this topics.

Here’s my reply to Javier:

Within a service, OOP stays as much of a good idea as it always was, because it gives us all the qualities of pre-built infrastructure reuse that we've learned to appreciate in recent years. I don't see much realistic potential for business logic or business object reuse, but OOP as a tool is well and alive.

Your point about services being sluggish has some truth to it if you look at system components in isolation. There is no doubt that a Porsche 911 is faster than a Ford Focus. However, if you look at a larger system as a whole (to stay with the picture, let's take a bridge crossing a river at rush hour), the Focus and the 911 move at the same speed because of congestion, a congestion that would occur even if everyone on that bridge were driving a 911. The primary goal is thus to make the bridge wider, not to give everyone a Porsche.

Maximizing throughput always tops optimizing raw performance. The idea of SOA in conjunction with autonomous computing networks decouples subsystems in a way that you get largely independent processing islands connected by one-way roads to which you can add arbitrary numbers of lanes (and arbitrary number of identical islands). So while an individual operation may indeed take a bit longer and the bandwidth requirements may be higher, the overall system can scale its capacity and throughput to infinity.
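The "one-way roads with arbitrary lanes" idea can be sketched in a few lines (Python, purely illustrative; the queue, the doubling "work" and the lane count are made-up stand-ins, not any particular messaging product). Capacity grows by adding identical consumers, not by making any single consumer faster:

```python
import queue
import threading

work = queue.Queue()      # the one-way road: producers only put, consumers only get
results = queue.Queue()

def island(worker_id: int) -> None:
    """An independent processing island; add more of these to add throughput."""
    while True:
        msg = work.get()
        if msg is None:            # shutdown signal
            work.task_done()
            return
        results.put((worker_id, msg * 2))   # stand-in for real processing
        work.task_done()

LANES = 4                          # widen the bridge by raising this number
workers = [threading.Thread(target=island, args=(i,)) for i in range(LANES)]
for w in workers:
    w.start()

for msg in range(20):              # producers never wait for a reply
    work.put(msg)
for _ in workers:                  # one shutdown signal per lane
    work.put(None)

work.join()
for w in workers:
    w.join()

processed = sorted(r for _, r in [results.get() for _ in range(20)])
```

Any single message may take a bit longer this way, but the total throughput scales with the number of lanes, which is exactly the bridge-versus-Porsche trade.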

Still, for a quick reality check: Have you looked at what size packages IIOP or DCOM produce on the wire and at the number of network roundtrips they require for protocol negotiation? The scary thing about SOAP is that it is really very much in our face and relatively easy to comprehend. Thus people tend to pay more attention to it. If you compare common binary protocols to SOAP (considering a realistic mix of payloads), SOAP doesn't look all that horrible. Also, XML compresses really well, much better than binary data. All that being said, I know that the vendors (specifically Microsoft) are looking very closely at how to reduce the wire footprint of SOAP and I expect them to come around with proposals in the not-too-distant future.
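The compression point is easy to demonstrate: angle-bracket markup is highly redundant, which is exactly what DEFLATE-style compressors exploit. A quick sketch (the envelope content is made up, just a repetitive SOAP-shaped payload):

```python
import gzip

# A small, repetitive SOAP-style envelope, as a typical list-of-records
# payload would look on the wire.
envelope = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body>'
    + "".join(f"<item><id>{i}</id><name>item-{i}</name></item>"
              for i in range(200))
    + "</soap:Body></soap:Envelope>"
).encode("utf-8")

compressed = gzip.compress(envelope)
ratio = len(compressed) / len(envelope)   # well under a third for markup like this
```

The repeated element names that make the raw envelope look bloated are precisely what shrinks away under compression, which is why on-the-wire size comparisons against binary protocols are less dramatic than they first appear.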

Over in the comment view of that article, Stu Charlton raises some concerns and posts some questions. Here are some answers:

1) "No shared application state, everything must be passed through messages."  Every "service" oriented system I have ever witnessed has stated this as a goal, and eventually someone got sick of it and implemented a form of shared state. The GIT in COM, session variables in PL/SQL packages, ASP[.NET] Sessions, JSP HttpSession, common areas in CICS, Linda/JavaSpaces, Stateful Session Beans, Scratchpads / Blackboards, etc. Concern: No distributed computing paradigm has ever eliminated transient shared state, no matter how messy or unscalable it is.

Sessions are scoped to a conversation; what I mean is application-scoped state shared across sessions. Some of the examples you give are about session state, some are about application state. Session state can’t be avoided (although it can sometimes be piggybacked into the message flow) and is owned by a particular service. If you’ve started a conversation with a service, you need to go back to that service to continue the conversation. If the service itself is implemented using a local (load balance and/or failover) cluster that’s great, but you shouldn’t need to know about it. Application state that’s shared between multiple services provided by an application leads to co-location assumptions and is therefore bad.

2) "A customer record isn't uniquely identifiable in-memory and even not an addressable on-disk entity that's known throughout the system"  -- Question: This confuses me quite a bit. Are you advocating the abolishment of a primary key for a piece of shared data? If not, what do you mean by this: no notion of global object identity (fair), or something else?

I am saying that not all data can and should be treated alike. There is shared data whose realistic frequency of change is so low, that it simply doesn’t deserve uniqueness (and be identified by a primary key in a central store). There is shared data for which a master copy exists, but of which many concurrent on-disk replicas and in-memory copies may safely float throughout the system as long as there is understanding about the temporal accuracy requirements as well as about the potential for concurrent modification. While there is always a theoretical potential for concurrent data modification, the reality of many systems is that records in many tables can and will never be concurrently accessed, because the information causing the change does not surface at two places at the same time. How many call center agents will realistically attempt to change a single customer’s address information at the same time? Lastly, there is data that should only be touched within a transaction and can and may only exist in a single place.

I am not abandoning the idea of a “primary key” or a unique customer number. I am saying that reflecting that uniqueness in in-memory state is rarely the right choice and rarely worth the hassle. Concurrent modification of data is rare, and there are techniques, such as introducing chronologies, to eliminate it in many cases. Even if you are booking into a financial account, you are just adding information to a uniquely identifiable set of data. You are not modifying the account itself; you add information to it. Counter example: if you have an object that represents a physical device such as a printer, a sensor, a network switch or a manufacturing robot, in-memory identity immediately reflects the identity of the physical entity you are dealing with. These are cases where objects and object identity make sense. That direct correspondence rarely exists in business systems. Those deal with data about things, not the things themselves.
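The account example can be sketched as a chronology (Python, purely illustrative; the class and field names are made up). The stable business key identifies the account, the balance is derived, and bookings only ever append:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Account:
    """An account as a chronology of postings, not a mutable balance."""
    account_no: str                           # the stable business key
    postings: List[float] = field(default_factory=list)

    def book(self, amount: float) -> None:
        # Appending is naturally concurrency-friendly: two bookings
        # add two entries; neither "overwrites" the other the way an
        # in-place balance update would.
        self.postings.append(amount)

    def balance(self) -> float:
        # Derived on demand from the chronology.
        return sum(self.postings)

acct = Account("DE-4711")
acct.book(100.0)
acct.book(-25.0)
```

Nothing here needs in-memory object identity across the system; any copy holding the same chronology yields the same balance, which is the whole point.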

3) "In a services world, there are no objects, just data". – […] Anyway, I don't think anyone [sane] has advocated building fine-grained object model distributed systems for quite a few years. […] But the object oriented community has known that for quite some time, hence the "Facade" pattern, and the packaging/reuse principles from folks such as Robert C. Martin. Domain models may still exist in the implementation of the service, depending on the complexity of the service.

OOP is great for the inner implementation of a service (see above), and I am in line with you here. There are, however, plenty of people who still believe in object purity, and that’s why I am saying what I am saying.

4) "data record stored & retrieved from many different data sources within the same application out of a variety of motivations"  --- I assume all of these copies of data are read-only, with one service having responsibility for updates. I also assume you mean that some form of optimistic conflict checking would be involved to ensure no lost updates. Concern: Traditionally we have had serializable transaction isolation to protect us from concurrent anomalies. Will we still have this sort of isolation in the face of multiple cached copies across web services?

I think that absolute temporal accuracy is severely overrated and more an engineering obsession than anything else. Amazon.com basically lies to the faces of millions of users each day by saying “only 2-4 items left in stock” or “Usually ships within 24 hours”. Can they give you to-the-second accurate information from their backend warehouse? Of course they can’t. They won’t even tell you when your stuff ships while you’re going through checkout and giving them your money. They’ll do so later, by email.

I also think that the risk of concurrent updates to records is – as outlined above – very low if you segment your data along the lines of the business use cases and not so much along the lines of what a DBA thinks is perfect form.

I’ll skip 5) and 6) (the answers are “Ok” and “If you want to see it that way”) and move on to
7) "Problematic assumptions regarding single databases vs. parallel databases for scalability" -- I'm not sure what the problem is here from an SOA perspective? Isn't this a physical data architecture issue, something encapsulated by your database's interface? As far as I know it's pretty transparent to me if Oracle decides to use a parallel query, unless I dig into the SQL plan. […]

“which may or may not be directly supported by your database system” is the half sentence to consider here as well. The Oracle cluster does it, SQL Server does it too, but there are other database systems out there, and there are also other ways of storing and accessing data than an RDBMS.

8) "Strong contracts eliminate "illegal argument" errors" Question: What about semantic constraints? Or referential integrity constraints? XML Schemas are richer than IDL, but they still don't capture rich semantic constraints (i.e. "book a room in this hotel, ensuring there are no overlapping reservations" -- or "employee reporting relationships must be hierarchical"). […]

“Book a room in this hotel” is a message to the service. The requirements-motivated answer to this message is either “yes” or “no”. “No overlapping reservations” is a local concern of that service, and even “Sorry, we don’t know that hotel” is. The employee reporting relationships for a message relayed to an HR service can indeed be expressed by referential constraints in XSD; the validity of merging the message into the backend store is an internal concern of the service. The answer is “can do that” or “can’t do that”.

What you won’t get are failures like “the employee name has more than 80 characters and we don’t know how to deal with that”. Stronger contracts and automatic enforcement of these contracts reduce the number of stupid errors, side-effects and the combination of stupid errors and side effects to look for – at either endpoint.
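The 80-character example is the kind of constraint a machine-enforced contract takes off the table. A minimal sketch (Python, purely illustrative; in a real contract this would be an XSD facet such as `<xs:maxLength value="80"/>`, enforced by the stack rather than hand-written):

```python
MAX_NAME_LENGTH = 80   # illustrative stand-in for an XSD maxLength facet

def accept_employee(name: str) -> str:
    """Endpoint-side contract enforcement: reject out-of-contract input
    before any business logic ever sees it."""
    if len(name) > MAX_NAME_LENGTH:
        raise ValueError("contract violation: name exceeds 80 characters")
    return name

ok = accept_employee("Jane Doe")   # in-contract input passes through untouched
```

With the constraint living in the contract and enforced automatically at both endpoints, neither side has to write, test or debug that check by hand, which is exactly the class of stupid error that disappears.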

9) "The vision of Web services as an integration tool of global scale exhibits these and other constraints, making it necessary to enable asynchronous behavior and parallel processing as a core principle of mainstream application design and don’t leave that as a specialty to the high-performance and super-computing space."  -- Concern: Distributed/concurrent/parallel computing is hard. I haven't seen much evidence that SOA/ web services makes this any easier. It makes contracts easier, and distributing data types easier. But it's up to the programming model (.NET, J2EE, or something else) to make the distributed/concurrent/parallel model easier. There are some signs of improvement here, but I'm skeptical there will be anything that breaks this stuff into the "mainstream" (I guess it depends on what one defines as mainstream)...

Oh, I wouldn’t be too sure about that. There are lots of things going on in that area that I know of but can’t talk about at present.

While SOA as a means of widespread systems integration is a solid idea, the dream of service-oriented "grid" computing isn't really economically viable unless the computation is very expensive. Co-locating processing & filtering as close as possible to the data source is still the key principle to an economic & performing system. (Jim Gray also has a recent paper on this on his website). Things like XQuery for integration and data federations (service oriented or not) still don't seem economically plausible until distributed query processors get a lot smarter and WAN costs go down.

Again, if the tools were up to speed, it would be economically feasible to do so. That’s going to be fixed. Even SOA-based grids sound much less like science fiction to me than they apparently do to you.

Categories: Architecture | IT Strategy

September 25, 2003
@ 03:27 PM
If you are a developer and don't live in the Netherlands (where SOA famously stands for "Sexueel Overdraagbare Aandoeningen" = "sexually transmitted diseases"), you may have heard by now that SOA stands for "service oriented architectures". In this article I am thinking aloud about what "services" mean in SOA.
Categories: Architecture | IT Strategy

Bob Cancilla’s CNet article is so full of FUD that I can’t help but make a few more comments and post a few questions. Unfortunately, his email address isn’t mentioned near the article … therefore I have to blog it. Mr. Cancilla, feel free to use the comments feature here, if you find this …

Unlike IBM, Microsoft falls short when it comes to helping customers use standards in a productive, cost-effective way. […] Sure, both companies have worked closely to develop and promote a sizable number of important industry standards that will continue to have a big impact on the way business is conducted in the foreseeable future. But cool specs are meaningless to the IT people who must actually assemble all those standards into real business solutions. That's where the rubber meets the road for Web services. Redmond's approach to Web services is a dead-end of closed, Windows-only systems that lock customers into a single computing model. Customers don't have the freedom to choose the best hardware or operating system. Where does that leave the millions of users who rely on non-Microsoft platforms such as mainframes, Unix or Linux?

First of all, Mr. Cancilla, you haven’t understood Web Services at all. Web Services are about connecting systems, irrespective of operating system, application platform, or programming model. Redmond’s approach to Web Services is just like IBM’s and BEA’s and Sun’s and Oracle’s approach to Web Services. All of them think they have a superior application platform, and their embrace of Web Services serves to make that platform the hub of communication for their systems and for all other systems that are (for them: unfortunately) running on other platforms in the reality of today’s heterogeneous IT landscape. It’s about opening services for access by other platforms. I wish I knew how you got the idea that “lock-in” and “Web Services” belong in the same sentence.

Secondly, show me an environment that makes the average programmer more productive, and hence more cost-effective, at developing XML and Web Services solutions than Microsoft’s Visual Studio .NET, to a degree that backs up your “falls short” claim.

Third, I wonder how someone who has dedicated his career to one of the most monopolistic, locked-down and proprietary platforms in existence, that is, IBM’s midrange and mainframe platforms, feels qualified to discredit Microsoft for its platform strategy. In fact, I can run Windows on pretty much any AMD- or Intel-based server or desktop from any vendor. How’s that working out with your AS/400 and mainframe apps?

Ultimately, .Net defeats the purpose of open standards because Microsoft products are open only as long as you develop applications on the Windows platform. To me, this doesn't say open, it says welcome to yet another Microsoft environment that is anything but open.

Likewise, IBM’s full Web Services stack is only open as long as you write applications for their WebSphere environment. WebSphere is IBM’s application server, and Microsoft’s application server is Windows Server 2003. Every vendor who makes money from software tries to build a superior platform, resulting in features that aren’t covered by standards and therefore cause vendor lock-in. That’s a direct result of market economics. However, this still doesn’t have anything to do with Web Services, because those are “on-the-wire” XML message exchange standards that exist primarily for the purpose of cross-platform interaction.

Proprietary environments deny businesses the flexibility to chose best-of-breed solutions that are fine-tuned to their industry's unique environment.

… like OS/400 and OS/390 ?

Additionally, Microsoft's claim that .Net's Web services platform saves customers money is misleading. Sure, the initial investment is enticing, but how much will it cost when the hard work begins? A recent Gartner report said companies planning to move their old programs to .Net can expect to pay 40 percent to 60 percent of the cost of developing the programs in the first place.

A recent discussion with my 9 year old niece has shown that moving “old programs” from anywhere to anywhere isn’t free and that anyone who’d make that claim shouldn’t be working in this industry.

Building your company's Web services platform on .Net is fine if you don't mind throwing away decades of investment in existing applications. For instance, on any given day, businesses use CICS systems to process about 30 billion transactions, to the tune of $1 trillion. They can't afford to rip out that kind of processing power. Instead, they're looking for ways to exploit it within other applications. But if they were to buy into .Net, they'd better be prepared to stack it on the shelf because Microsoft's Host Integration Server provides limited access to CICS on mainframes.

Ok … here we have it. CICS is a lock-in, proprietary IBM product, right? So what’s better than Host Integration Server? I suspect it’s an IBM product, correct? And if you were to replace that all-so-powerful IBM mainframe with any other technology (including Linux), of course using a different approach to architecture (which is entirely possible), how would you avoid throwing away that investment?

What seems to be promoted here is “stay with IBM, use their stack”. I have all respect for the power of IBM’s mainframe platforms, but using “openness” as an argument in this context, and for the conclusions the author draws, is nothing less than perverse.

Categories: IT Strategy

August 6, 2003
@ 09:05 AM

According to InfoWorld, SCO wants to have $699 per processor from everyone who runs Linux on a server, as an “introductory” offer and will ask $1399 after Oct 15 for a single-processor license.

That’s pretty stunning. Even if SCO were right with their allegations that there is SCO-owned IP in Linux (and I am not in a position to make any statement about the rightfulness of that claim), $699 per processor is still well above and beyond what would be their fair share. That pricing scheme looks more like “this is our OS, including everything that ships with it”, and that seems just plain wrong.

This reminds me of a similar high-profile case: the Unisys GIF license. Unisys holds/held the LZW patent, which is used in GIF. Of course, lots of developers found out about the patent only after they had implemented and shipped GIF support, and Unisys probably waited for a critical mass to accumulate before starting to enforce licensing fees for the patent in early 1995. (Side note from the Unisys site: The U.S. LZW patent expires June 20, 2003; the counterpart Canadian patent expires July 7, 2004; the counterpart patents in the United Kingdom, France, Germany and Italy expire June 18, 2004; and the Japanese counterpart patents expire June 20, 2004.) While Unisys certainly didn’t make too many friends with that move, they were initially asking something like ten cents to at most ten dollars per distributed copy of any software package using the GIF format, if I remember right (the pricing information seems to have vanished). Looking at the SCO case, that all of a sudden seems very fair and reasonable.

[Still, the web-site fee of $5000 per site if you were/are using unlicensed (as in: Unisys’ LZW patent licensing) software to create your GIFs, remains outrageous]

Categories: IT Strategy

July 21, 2003
@ 04:02 PM
Stephen Forte is a big fan of Munich, except for their Linux strategy; and I do agree on the latter -- for some obvious and some not-so-obvious reasons.
Categories: IT Strategy