I've spent the last 1 1/2 weeks doing one of the most fun (seriously) work assignments that each Program Manager on our team gets to do every once in a while: Servicing. So until last night (I'm flying home to Germany today) I was in charge of ASP.NET Web Services and Remoting. And even though these technologies have been out there for quite a while now, there are still situations where stuff breaks and people are left scratching their heads wondering what's going on. Overall, though, it was a very, very quiet time on the bug front.

The one issue that we found on my watch is that you can configure ASP.NET Web Forms in a way that breaks ASP.NET Web Services (ASMX). We are shipping one ASP.NET Web Page (.aspx) with ASMX, and that unfortunate interaction manages to break that exact page with an error that's hard to figure out unless you have substantial ASP.NET knowledge and enough confidence in that knowledge to not trust us ;-)

If you globally override the autoEventWireup setting in the <pages/> config element in the ASP.NET web.config and set it to "false", the DefaultWsdlHelpGenerator.aspx page (which sits in the CONFIG directory of the Framework) becomes very unhappy and fails with a NullReferenceException, stating "Object reference not set to an instance of an object." and showing you some code that's definitely not yours.
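
For illustration, here is a minimal web.config sketch of the kind of global override that triggers the failure:

   <configuration>
     <system.web>
       <!-- globally turns off auto event wiring; this is what breaks the help page -->
       <pages autoEventWireup="false" />
     </system.web>
   </configuration>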

What happened? Well, the file is missing a directive that overrides the override of the default. The fix is to go edit the DefaultWsdlHelpGenerator.aspx file and add the line:

<%@ Page AutoEventWireup="true" %>

That will fix the problem.

Now, the big question is: "Will you put that into a service pack?" While there's obviously a bug here, the answer is, in this particular case, "don't know yet". Replacing or editing that particular file is potentially very impactful surgery on a patched system, given that the file is there, in source code, in the config directory, precisely because you are supposed to be able to change it. Could we touch changed files? Probably not. Could we touch unchanged files? Probably. So how would you surface the difference and make sure that the systems we couldn't patch would not suffer from the particular bug? What's the test impact for the code and for the service pack or patch installer? How many people are actually using that ASP.NET config directive AND are hosting ASMX services in the same application and/or scope? Is it actually worth doing? Making changes in code that has already shipped and is part of the Framework is serious business, since you are potentially altering the behavior of millions of machines all at once. So that part is definitely not done in an "agile" way but takes quite a bit of consideration, while for you it takes just 10 seconds and notepad.exe.

Categories: ASP.NET | Web Services

I was sad when "Indigo" and "Avalon" went away. It'd be great if we had a pool of cool, legal-approved code names for which we own the trademark rights and which we could stick to. Think Delphi or Safari. "Indigo" was cool insofar as it was very handy for referring to the technology set, but removed far enough from the specifics that it didn't create a sharply defined, product-like island within the larger managed-code landscape or carry legacy connotations like "ADO.NET". Also, my talks these days could be 10 minutes shorter if I could refer to Indigo instead of "Windows Communication Foundation". Likewise, my job title wouldn't need a line wrap on the business card if I ever spelled it out in full.

However, when I learned that the WinFX name was going away (several weeks before the public announcement) and that the new "Vista Wave" technologies (WPF/WF/WCF/WCS) were being rolled up under the .NET Framework brand, I was quite happy. Ever since it became clear in 2004 that the grand plan to put a complete, covers-all-and-everything managed API on top (and on quite a bit of the bottom) of everything Windows would have to wait until significantly after Vista, and that the Win16>Win32>WinFX continuity would therefore not tell the true story, that name made only limited sense to stick to. The .NET Framework is the #1 choice for business applications and a well-established brand. People refer to themselves as "dotnet" developers. But even though the .NET Framework covers a lot of ground and "Indigo", "Avalon", "InfoCard", and "Workflow" are overwhelmingly (or exclusively) managed-code based, there are still quite a few things in Windows Vista that require using P/Invoke or COM/Interop from managed code, or unmanaged code outright. That's not a problem. Something has to manage the managed code, and there's no urgent need to rewrite entire subsystems in managed code if you only want to add or revise features.

So now all the new stuff is part of the .NET Framework. That is a good, good, good change. The name says what it all is.

Admittedly confusing is the "3.0" bit. What we'll ship is a Framework 3.0 that rides on top of the 2.0 CLR and includes the 2.0 versions of the Base-Class Library, Windows Forms, and ASP.NET. It doesn't include the formerly-announced-as-to-be-part-of-3.0 technologies like VB9 (there you have the version number consistency flying out the window outright), C# 3.0, and LINQ. Personally, I think that it might be a tiny bit less confusing if the Framework had a version-number neutral name such as ".NET Framework 2006" which would allow doing what we do now with less potential for confusion, but only a tiny bit. Certainly not enough to stage a war over "2006" vs. "3.0".

It's a matter of project management reality, and also one of platform predictability, that the ASP.NET or Windows Forms teams do not and should not ship a full major-version revision of their bits every year. They shipped Whidbey (2.0) in late 2005, and hence it's healthy for them to have boarded the scheduled-to-arrive-in-2007 boat heading to Orcas. We (the "WinFX" teams) subscribed to the Vista ship docking later this year, and we bring great innovation that will be preinstalled on every copy of it. LINQ, as well as VB9 and C# incorporating it on a language level, are very obviously Visual Studio bound, and hence they are on the Orcas ferry as well. The .NET Framework is a steadily growing development platform that spans technologies from the Developer Division, Connected Systems, Windows Server, Windows Client, SQL Server, and other groups, and my gut feeling is that it will become the norm for it to be extended off-cycle from the Developer Division's Visual Studio and CLR releases. Whenever a big ship docks in the port, be it Office, SQL, BizTalk, Windows Server, or Windows Client, and as more and more of the still-unmanaged Win32/Win64 surface area gets wrapped, augmented, or replaced by managed-code APIs over time and entirely new things are added, there might be bits that fit into and update the Framework.

So one sane way to think about the .NET Framework version number is that it merely labels the overall package, not the individual assemblies and components included within it. Up to 2.0 everything was pretty synchronized, but given the ever-increasing scale of the thing, it's good to think of that as a lucky (even if intended) coincidence of scheduling. This surely isn't the first time that packages have been versioned independently of their components. There was and is no reason for the ASP.NET team to gratuitously recompile their existing bits with a new version number just to make the GAC look pretty and create the illusion that everything is new - and to break Visual Studio compatibility in the process.

Of course, once we cover 100% of the Win32 surface area, we can rename it all into WinFX again ;-)  (just kidding)

[All the usual "personal opinion" disclaimers apply to this post]

Update: Removed reference to "Win64".

Categories: IT Strategy | Technology | ASP.NET | Avalon | CLR | Indigo | Longhorn | WCF | Windows

December 11, 2004
@ 02:35 PM

The stack trace below (snapshot taken at a breakpoint in the [WebMethod] "HelloWorld") shows that I am having quite a bit of programming fun these days: server-side ASP.NET hooked up to an MSMQ listener.

simpleservicerequestinweb.dll!SimpleServiceRequestInWeb.Hello.HelloWorld() Line 53 C#
system.web.services.dll!System.Web.Services.Protocols.LogicalMethodInfo.Invoke(System.Object target, System.Object[] values) + 0x92 bytes 
system.web.services.dll!System.Web.Services.Protocols.WebServiceHandler.Invoke() + 0x9e bytes 
system.web.services.dll!System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest() + 0x142 bytes 
system.web.services.dll!System.Web.Services.Protocols.SyncSessionlessHandler.ProcessRequest(System.Web.HttpContext context) + 0x6 bytes 
system.web.dll!CallHandlerExecutionStep.System.Web.HttpApplication+IExecutionStep.Execute() + 0xb4 bytes 
system.web.dll!System.Web.HttpApplication.ExecuteStep(System.Web.HttpApplication.IExecutionStep step, bool completedSynchronously) + 0x58 bytes 
system.web.dll!System.Web.HttpApplication.ResumeSteps(System.Exception error) + 0xfa bytes 
system.web.dll!System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(System.Web.HttpContext context, System.AsyncCallback cb, System.Object extraData) + 0xe3 bytes 
system.web.dll!System.Web.HttpRuntime.ProcessRequestInternal(System.Web.HttpWorkerRequest wr) + 0x1e7 bytes 
system.web.dll!System.Web.HttpRuntime.ProcessRequest(System.Web.HttpWorkerRequest wr) + 0xb0 bytes 
newtelligence.enterprisetools.dll!newtelligence.EnterpriseTools.Msmq.MessageQueueAsmxDispatcher.MessageReceived(System.Object sender = {newtelligence.EnterpriseTools.Msmq.MessageQueueListener}, newtelligence.EnterpriseTools.Msmq.MessageReceivedEventArgs ea = {newtelligence.EnterpriseTools.Msmq.MessageReceivedEventArgs}) Line 33 C#
newtelligence.enterprisetools.dll!newtelligence.EnterpriseTools.Msmq.MessageQueueListener.ReceiveLoop() Line 305 + 0x2b bytes C#

Categories: ASP.NET | MSMQ | Web Services

I was a little off when I compared my problem here to a tail call. Gordon Weakliem corrected me with the term "continuation".

The fact that the post got 28 comments shows that this seems to be an interesting problem and, naming aside, it is indeed a tricky thing to implement in a framework when the programming language you use (C# in my case) doesn't support the construct. What's specifically tricky about the concrete case that I have is that I don't know where I am yielding control to at the time when I make the respective call.

I'll recap. Assume there is the following call:

CustomerService cs = new CustomerService();
cs.FindCustomer(customerId);

FindCustomer is a call that will not return any result as a return value. Instead, the invoked service comes back into the caller's program at some completely different place, such as this:

[WebMethod]
public void
FindCustomerReply(Customer[] result)
{
   ...
}

So what we have here is a "duplex" conversation. The result of an operation initiated by an outbound message (call) is received, some time later, through an inbound message (call), but not on the same thread and not on the same "object". You could say that this is a callback, but that's not precisely what it is, because a "callback" usually happens while the initiating call (FindCustomer above) has not yet returned to its scope, or at least while the initiating object (or an object passed by some sort of reference) is still alive. Here, instead, processing of the FindCustomer call may take a while, and the initiating thread and the initiating object may be long gone when the answer is ready.

Now, the additional issue I have is that at the time the FindCustomer call is made, it is not known which "FindCustomerReply" message handler is going to be processing the result, and it is really not known what's happening next. The decision about what happens next and which handler is chosen depends on several factors, including the time it takes to receive the result. If FindCustomer is called from a web page and the service providing FindCustomer drops a result at the caller's doorstep within 2-3 seconds [1], the FindCustomerReply handler can go and hijack the initial call's thread (and HTTP context) and render a page showing the result. If the reply takes longer, the web page (the caller) may lose its patience [2] and choose to continue by rendering a page that says "We are sending the result to your email account."; the message handler will then not throw HTML into an HTTP response on an open socket, but rather render the result to an email and send it via SMTP, and maybe even alert the user through his/her instant messenger when/if the result arrives.

[1] HTTP Request => FindCustomer() =?> "FindCustomerReply" => yield to CustomerList.aspx => HTTP Response
[2] HTTP Request => FindCustomer() =?> Timeout!            => yield to YouWillGetMail.aspx => HTTP Response
                               T+n =?> "FindCustomerReply" => SMTP Mail
                                                           => IM Notification

So, in case [1] I need to correlate the reply with the request and continue processing on the original thread. In case [2], the original thread continues on a "default path" without an available reply and the reply is processed on (possibly two) independent threads and using two different notification channels.
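
To make the two cases concrete, here is a minimal sketch of the correlation idea (all names are hypothetical and race conditions are glossed over; this is not the actual infrastructure): the request thread parks on a wait handle keyed by a correlation ID, and the reply handler either signals it in time (case [1]) or finds nobody waiting and takes the out-of-band path (case [2]).

   using System;
   using System.Collections;
   using System.Threading;

   public class CorrelationTable
   {
       static readonly Hashtable waiters = Hashtable.Synchronized(new Hashtable());
       static readonly Hashtable replies = Hashtable.Synchronized(new Hashtable());

       // called on the request thread right after the outbound FindCustomer call
       public static object WaitForReply(Guid correlationId, TimeSpan timeout)
       {
           ManualResetEvent signal = new ManualResetEvent(false);
           waiters[correlationId] = signal;
           try
           {
               if (signal.WaitOne(timeout, false))
                   return replies[correlationId]; // case [1]: render the result page
               return null;                       // case [2]: render "you will get mail"
           }
           finally
           {
               waiters.Remove(correlationId);
               replies.Remove(correlationId);
           }
       }

       // called on the reply handler's thread when FindCustomerReply arrives
       public static bool TryDeliver(Guid correlationId, object reply)
       {
           ManualResetEvent signal = (ManualResetEvent)waiters[correlationId];
           if (signal == null)
               return false; // the caller gave up; route the reply to SMTP/IM instead
           replies[correlationId] = reply;
           signal.Set();
           return true;
       }
   }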

A slightly different angle. Consider a workflow application environment in a bank, where users are assigned tasks and simply fetch the next thing from the to-do list (by clicking a link in an HTML-rendered list). The reply that results from "LookupAndDoNextTask" is a message that contains the job that the user is supposed to do.  

[1] HTTP Request => LookupAndDoNextTask() =?> Job: "Call Customer" => yield to CallCustomer.aspx => HTTP Response
[2] HTTP Request => LookupAndDoNextTask() =?> Job: "Review Credit Offer" => yield to ReviewCredit.aspx => HTTP Response
[3] HTTP Request => LookupAndDoNextTask() =?> Job: "Approve Mortgage" => yield to ApproveMortgage.aspx => HTTP Response
[4] HTTP Request => LookupAndDoNextTask() =?> No Job / Timeout => yield to Solitaire.aspx => HTTP Response

In all of these cases, calls to "FindCustomer()" and "LookupAndDoNextTask()" that are made from the code that deals with the incoming request will (at least in the theoretical model) never return to their caller, and the thread will continue to execute in a different context that is "TBD" at the time of the call. By the time the call stack is unwound and the initiating call (like FindCustomer) indeed returns, the request has therefore been fully processed and the caller may not perform any further actions.

So the issue at hand is to make that fact clear in the programming model. In ASP.NET, there is a single construct called "Server.Transfer()" for that sort of continuation, but it's very specific to ASP.NET and requires that the caller knows where it wants to yield control to. In the case I have here, the caller knows that it is surrendering the thread to some other handler, but it doesn't know to whom, because this is dynamically determined by the underlying frameworks. All that's visible, and should be visible, in the code is a "normal" method call.

cs.FindCustomer(customerId) might therefore not be a good name, because it looks "too normal". And of course I don't have the powers to invent a new statement for the C# language like continue(cs.FindCustomer(customerId)) that would result in a continuation that simply doesn't return to the call location. Since I can't do that, there has to be a different way to flag it. Sure, I could put an attribute on the method, but IntelliSense wouldn't show that, would it? So it seems the best way is to have a convention of prefixing the method name.

There were a bunch of ideas in the comments for method-name prefixes. Here is a selection:

  • cs.InitiateFindCustomer(customerId)
  • cs.YieldFindCustomer(customerId)
  • cs.YieldToFindCustomer(customerId)
  • cs.InjectFindCustomer(customerId)
  • cs.PlaceRequestFindCustomer(customerId)
  • cs.PostRequestFindCustomer(customerId)

I've got most of the underlying correlation and dispatch infrastructure sitting here, but finding a good programming model for that sort of behavior is quite difficult.
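
For illustration, here is what a prefix-plus-attribute flavor might look like (a sketch with hypothetical names, not the actual infrastructure): the attribute carries the intent for tools, the prefix carries it for the reader, since IntelliSense won't surface the attribute.

   using System;

   // marks a method whose "result" arrives elsewhere, on another thread,
   // through a dynamically chosen handler
   [AttributeUsage(AttributeTargets.Method)]
   public class ContinuationAttribute : Attribute
   {
   }

   public class CustomerService
   {
       [Continuation] // control does not meaningfully return to the caller
       public void PostRequestFindCustomer(string customerId)
       {
           // hand the request to the dispatch infrastructure; which
           // FindCustomerReply handler processes the result is decided
           // later, at delivery time
       }
   }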

[Of course, this post won't make it on Microsoft Watch, eWeek or The Register]

Categories: Architecture | SOA | Technology | ASP.NET | CLR

Microsoft urgently needs to consolidate all the APIs that are required for provisioning services or sites. The amount of knowledge you need to have, and the number of APIs you need to use, in order to lock down a Web service or Enterprise Services application programmatically at installation time so that it runs under an isolated user account (with a choice of local or domain account) that has the precise rights to do what it needs to do (but nothing else) is absolutely insane.

You need to set ACLs on the file system and the registry, you need to modify the local machine's security policy, you need to create accounts and add them to local groups, you must adhere to password policies with your auto-generated passwords, you need to configure identities on Enterprise Services applications and IIS application pools, you need to set ACLs on Message Queues (if you use them), and you need to write WS-Policy documents to secure your WS front. Every single one of these tasks uses a different API (and writing policies has none), and most of these jobs require explicit Win32 or COM interop. I have a complete wrapper for that functionality for my app now (which took way too long to write), but that really needs to be fixed at the platform level.
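
To illustrate just one small slice of it: here is a sketch of granting a service account rights on a directory, using the managed System.Security.AccessControl API that .NET 2.0 later introduced (the account and path names are made up; at the time of writing, this particular task still meant P/Invoke):

   using System.IO;
   using System.Security.AccessControl;

   public class Provisioning
   {
       public static void GrantDataDirectoryAccess()
       {
           // a sketch: give the isolated service account (hypothetical name)
           // read/write access to the application's data directory, nothing else
           DirectorySecurity security = Directory.GetAccessControl(@"C:\MyApp\Data");
           security.AddAccessRule(new FileSystemAccessRule(
               @"MYMACHINE\MyAppSvc",
               FileSystemRights.Read | FileSystemRights.Write,
               InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
               PropagationFlags.None,
               AccessControlType.Allow));
           Directory.SetAccessControl(@"C:\MyApp\Data", security);
       }
   }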

Categories: Technology | ASP.NET | Enterprise Services

June 2, 2004
@ 08:46 AM

Ted Neward has a crusade against DataSets going on on his blog. At this point in time, I really only ever use them inside a service, and only at times when I am horribly lazy or when I code under the influence. Otherwise I just go through the rather quick and mostly painless process of mapping plain data structures (generated from schema) to and from stored procedure calls myself. More control, more interoperability, less weight. I really like it when my code precisely states how my app interacts with one of its most important components: the data store.

I don't even use DataSets on ASP.NET web pages anymore. The data binding logic lets you bind against anything, and if I have a public or protected property "Customer" on my page class that is a data structure, I can simply have an expression like <%# Customer.Name %> on my page and all is good. Likewise, a DataGrid happily binds against anything that is an ICollection (Array, ArrayList, ...), and the DataGridItem.DataItem property will then contain the individual element. It's just that the design-time support in VS.NET is very DataSet-focused and messes things up when you click the wrong things.
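
To make that concrete, here is a minimal sketch of such a page class (Customer and CustomerStore are hypothetical types, not anything shipped):

   using System;
   using System.Web.UI;
   using System.Web.UI.WebControls;

   public class CustomerPage : Page
   {
       // referenced on the .aspx page as <%# Customer.Name %>
       protected Customer Customer;
       // a DataGrid declared in the .aspx markup
       protected DataGrid OrdersGrid;

       private void Page_Load(object sender, EventArgs e)
       {
           Customer = CustomerStore.Load(Request.QueryString["id"]);
           OrdersGrid.DataSource = Customer.Orders;  // any ICollection will do
           DataBind();                               // resolves the <%# %> expressions
       }
   }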

DataSets are really cool for Windows Forms apps. By now I've reached a point where I simply conclude that the DataSet class should be banned from the server-side.

Categories: Technology | ASP.NET

A good deal of yesterday and some of this morning I've been fiddling around with nested ASP.NET DataGrids. Binding nested grids is pretty easy and they show all you want, but editing items in a nested grid just doesn't work as easily as editing in a simple grid. In fact, it doesn't work at all. What happens is that you can put a nested grid into edit mode, but you never seem to be able to catch any Update/Cancel events from the edited item.

I tried to look for a solution by asking Google, but the answers I found were very unsatisfactory, since there was no explanation of why exactly it doesn't work. So, here's why ... and it's very, very simple: nested DataGrids lose all of their ViewState on any roundtrip. That seems to be some sort of problem related to how the entire TemplateControl infrastructure works, but that's what it is.

Since that's the case, the EditItemIndex isn't preserved across the roundtrip and the DataGrid doesn't know how to dispatch the Update event. Now, how do I work around it? Again, pretty simple: You need to store the EditItemIndex (and SelectedItemIndex, etc.) of the nested data-grid in the Page's ViewState whenever they change (Edit event, Cancel event, etc.), keyed by the UniqueID of the DataGrid and a matching suffix. When you reload the sub-grid on a roundtrip, recover the value(s) from the ViewState of the page and DataBind(). 
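
In code, the workaround looks roughly like this (a sketch of page-class methods; the member names and the data lookup are hypothetical):

   void NestedGrid_EditCommand(object source, DataGridCommandEventArgs e)
   {
       // remember the edit index in the *page's* ViewState, keyed by the grid's
       // UniqueID plus a suffix, because the nested grid loses its own ViewState
       DataGrid grid = (DataGrid)source;
       ViewState[grid.UniqueID + "_EditItemIndex"] = e.Item.ItemIndex;
       BindNestedGrid(grid);
   }

   void NestedGrid_CancelCommand(object source, DataGridCommandEventArgs e)
   {
       DataGrid grid = (DataGrid)source;
       ViewState.Remove(grid.UniqueID + "_EditItemIndex");
       BindNestedGrid(grid);
   }

   void BindNestedGrid(DataGrid grid)
   {
       // on every roundtrip, recover the stored index before rebinding
       object saved = ViewState[grid.UniqueID + "_EditItemIndex"];
       grid.EditItemIndex = (saved != null) ? (int)saved : -1;
       grid.DataSource = GetChildRows(grid); // hypothetical data lookup
       grid.DataBind();
   }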

I've put the workaround into my current working copy of dasBlog (the OPML editor gets a hierarchical editor now) and it works great. You can look at it in the next build that gets released.

Categories: Technology | ASP.NET

July 21, 2003
@ 10:10 PM
A quick overview of my changes to BlogX and why it just isn't BlogX anymore. And, yes, you'll get it.
Categories: Blog | Technology | ASP.NET

BloggerAPI, MT API, MetaWeblog API, Comment API, Pingback API, Trackback  ...  are you nuts?

I must admit that until last week I didn't really pay close attention to all the blogging-related APIs and specs beyond "keeping myself informed". Today I copied my weekend's work over to this server, and now I have all of them implemented as client and server versions. Sam's and Mark's validator is happy with my RSS 2.0 feed and the experimental Atom (Pie/Echo) feed.

I have to say ... the state of affairs in this space is absolutely scary. Most of the specs, especially for the APIs, lack proper detail, are often too informal with too much room for ambiguity, and you need to be lucky to find a reasonably recent version. Sam laments that people don't read specs carefully and I agree, but I would argue that the specs need to be written carefully, too. It also seems that because the documentation on expected behavior is so thin, everybody implements their own flavor and extensions; not only do the APIs have huge overlap, but it seems like any random selection of offline blogging tools will use its own arbitrary selection of these APIs in any random order. Since my implementation didn't "grow" over time but was written in one shot, essentially since last Thursday, I had to look at all of this at once, and what I found was just saddening. All of this has to be consolidated, and it will be.

I am all for the Atom project and creating a consolidated, SOAP-based API for all blogging functions that the aforementioned APIs offer. XML-RPC was a good thing to start with but its time is up.  I am also for replacing RSS x.x with a spec that's open and under the umbrella of a recognized standards body and not of a law school, that's XML as of ca. 2003 and not as of ca. 1998, and that's formally documented (with a proper schema). What's there right now smells all like "let's hack something up" and not very much like serious software engineering. Ok, it's proven that it all works, but how about dumping the prototypes now?

Categories: Blog | Technology | ASP.NET | Weblogs | Atom

July 19, 2003
@ 08:39 AM

This morning I got up early (I'm going to be picked up to play paintball in an hour or so) and implemented image and attachment uploads for the blogging site. This is the test for the live site.

[Here's a copy of the SoapExtension Wizard for Visual Studio.NET: ASPNETSoapExtensionWizard.zip (53.82 KB)]

Categories: Technology | ASP.NET | Blog

July 18, 2003
@ 08:28 PM

Productivity and ASP.NET

It took me less than an hour to implement, test, and deploy pingback support for this blog using ASP.NET and XML-RPC.NET (and that includes reading the spec). Yesterday and today, it took me less than two hours total (including addressing two comments/suggestions/corrections from Sam Ruby) to get (n)echo/pie/atom support working so that it can be validated.

Categories: Technology | ASP.NET

A little IHttpModule implementation for ASP.NET that maps between URLs using regular expressions. In use here.
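
Not the actual module, but a minimal sketch of the shape such a thing takes (the regex and the target URL are made up):

   using System;
   using System.Text.RegularExpressions;
   using System.Web;

   public class RegexUrlMappingModule : IHttpModule
   {
       // maps e.g. /articles/42.aspx to the page that actually serves the content
       static readonly Regex map = new Regex(@"/articles/(\d+)\.aspx$");

       public void Init(HttpApplication application)
       {
           application.BeginRequest += new EventHandler(OnBeginRequest);
       }

       void OnBeginRequest(object sender, EventArgs e)
       {
           HttpApplication application = (HttpApplication)sender;
           Match match = map.Match(application.Request.Path);
           if (match.Success)
           {
               application.Context.RewritePath(
                   "/ShowArticle.aspx?id=" + match.Groups[1].Value);
           }
       }

       public void Dispose()
       {
       }
   }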
Categories: Technology | ASP.NET | Blog