Strive for one call per user action

While working on my assigned project at work, I noticed some lag populating a few drop-downs on a screen I was on. I immediately suspected that the drop-downs were being loaded by asynchronous calls performed after the screen had loaded. So I fired up Firebug, and sure enough, there they were: 3 calls were being made to the server to get 3 different sets of data to populate 3 drop-downs. Despite the calls being small and relatively quick (28-49ms each), the lag was still visible, long enough that I was able to open one of the drop-downs before it had finished being populated.
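
To make the pattern concrete, the screen was doing something like this (a minimal sketch; the endpoint names and the bindDropDown helper are hypothetical):

// Hypothetical sketch of the chatty pattern I saw: one request per
// drop-down, all fired after the screen loads, so a list can be
// opened before its data arrives.
function bindDropDown(id: string, items: string[]): void {
    const select = document.getElementById(id) as HTMLSelectElement;
    select.innerHTML = items.map(i => `<option>${i}</option>`).join("");
}

for (const name of ["manufacturers", "categories", "locations"]) {
    fetch(`/api/${name}`)                     // three separate round trips
        .then(r => r.json())
        .then((items: string[]) => bindDropDown(`${name}List`, items));
}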

Now some might say, so what? Big deal. Why fuss over 49ms? Well, despite the very visible lag, this is just a symptom of a problem that could easily escalate if more drop-downs of this nature were added to the screen. And if this “pattern” is replicated on other, more complicated screens, it could lead to user frustration and inefficient use of resources, especially if the app is stateless and requires authentication.

I’ve followed a guideline, almost a rule, over the past 4 years that I have been building service-oriented single page apps, and that is: perform no more than one call per user action. A user-initiated action is nothing more than a simple use case, and a simple use case should define a contract that specifies what must be submitted and what must be provided if the preconditions are met. To me, these extra calls are being made because the use case requirements were not fully implemented in one service call. Think of the activity flow or sequence diagram one might draw for a given use case: in its simplest form, the user actor initiates some action with the system and the system responds. Now, I’m not talking about calls for images or other visual elements; when I refer to calls, I mean service method invocations. This, by and large, is why I take issue with using (non-pragmatic) RESTful services, with their full HATEOAS-driven implementations, as the model for building services for client side applications, but that’s a post for another day.
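
As a sketch of the guideline (the endpoint and the response shape here are my own assumptions, not a prescription), the same screen-load action would make a single call whose contract returns everything the use case needs, reusing the bindDropDown helper from the earlier sketch:

// Hypothetical single call for the "load screen" user action. One
// round trip satisfies the whole use case contract.
interface ScreenLoadResponse {
    manufacturers: string[];
    categories: string[];
    locations: string[];
}

async function loadScreen(): Promise<void> {
    const data: ScreenLoadResponse =
        await fetch("/api/productScreen/load").then(r => r.json());
    bindDropDown("manufacturerList", data.manufacturers);
    bindDropDown("categoryList", data.categories);
    bindDropDown("locationList", data.locations);
}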

Another concern I have when I see chatty applications engaging in this type of behavior is that it suggests the client application is more familiar with the intimate details of the middleware than it probably should be. That can lead to unintended exposure of pieces of the system that should otherwise be encapsulated, which raises security concerns and allows new, unexpected permutations of workflows and interactions with those functions.

Now it may seem that I am arguing against reusability here, but I assure you it’s quite the opposite. One could still have those same fine-grained functions, but encapsulate them as implementation detail code that is reused by coarser-grained, use case specific methods. In fact, by doing this you will likely find that some of the functionality that lives in the front end code makes more sense encapsulated in the use case service method on the server. That code would then be automatically reused if one were to build an alternative front end that leveraged the same service methods. You might even reduce the amount of data you send over the wire, especially if some of it would otherwise be filtered out or is only used to help process or relate the data.
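
A rough sketch of what I mean (all of the names here are made up): the fine-grained functions remain as private, server-side implementation details, and one coarse-grained, use case specific method composes them:

// Hypothetical fine-grained functions, kept as implementation details
// rather than individually exposed endpoints.
interface ProductRecord { name: string; manufacturer: string; }

async function fetchManufacturers(): Promise<string[]> {
    return [];  // stand-in for a real data access call
}

async function fetchProducts(): Promise<ProductRecord[]> {
    return [];  // stand-in for a real data access call
}

// The single coarse-grained method a front end would actually call.
// Any processing done here is automatically reused by alternative
// front ends that invoke the same service method.
async function loadProductScreen() {
    const [manufacturers, products] = await Promise.all([
        fetchManufacturers(),
        fetchProducts(),
    ]);
    return { manufacturers, products };
}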

For example, let’s say that we organize products by manufacturer, and that we have two drop-down lists to filter with: one a list of manufacturers, the other a list of products. If we obtain these two lists separately and then drive the contents of the product list based on the selected manufacturer, then we are writing code on the front end to handle that processing. We also likely have a manufacturer id, or some sort of code, possibly the name, on each product record to associate the products with their manufacturers. An alternative approach would be to send down a dictionary with the manufacturer names as keys and object values containing each manufacturer’s products already broken out. The processing of the two lists would be completed on the server, and an alternative front end would not have to repeat the same logic. Consider the following:

[
    "Acme",
    "Good Company",
    ...,
    "Sears"
]
and 
{
    "Anvil":            "Acme",
    "Dynamite":         "Acme",
    "Paint on hole":    "Acme",
    "Bandage":          "Good Company",
    ...,
    "Garden Hose" :     "Sears"
}

vs.

{
    "Acme":         {"products": ["Anvil", "Dynamite", "Paint on hole"]},
    "Good Company": {"products": ["Bandage"]},
    ...,
    "Sears":        {"products": ["Garden Hose"]}
}
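
Producing the second shape on the server from the two flat lists is a small transform. Here is a sketch, reusing the hypothetical ProductRecord shape from the earlier snippet:

// Hypothetical server-side transform: group the flat product list
// into the manufacturer-keyed dictionary shown above.
function groupByManufacturer(
    manufacturers: string[],
    products: ProductRecord[],
): Record<string, { products: string[] }> {
    const result: Record<string, { products: string[] }> = {};
    for (const m of manufacturers) {
        result[m] = { products: [] };
    }
    for (const p of products) {
        result[p.manufacturer]?.products.push(p.name);
    }
    return result;
}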

The code on the front end can now simply rebind the product list control with the products property of the company object bound to the selected item in the company list control. With the two separate lists, the front end code would have to scan through the list of products to locate the related products to bind to the product list control whenever the selection in the company list changed. Consider also that there may be more complicated rules affecting those lists, such as the user’s location for availability, or a combination of other factors, and it becomes easier to see why we might want to move that concern to a more centralized location.
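
In code, the change handler reduces to a key lookup (again a sketch, reusing the hypothetical bindDropDown helper):

// With the dictionary shape, selecting a manufacturer is just a
// lookup; no scanning of a separate product list is needed.
type Catalog = Record<string, { products: string[] }>;

function onManufacturerChanged(catalog: Catalog, selected: string): void {
    bindDropDown("productList", catalog[selected]?.products ?? []);
}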

I’m interested in other opinions on this topic. What do you think? Should we strive to make our front ends pretty and dumb, or should we move more processing logic to the front and have finer-grained access to resources from the middleware? Please leave your thoughts in the comments below.