Generics in .Net – Part 5 – Generic Namespaces

So, it’s been a while since I’ve published a post on this site.  Today I found myself answering a Stack Overflow post with essentially what I had planned on covering for this part of my series on generics in .Net.  So I figured that I would post the same content here.  As a result, this 5th part is being published out of order and before parts 2-4.  Hopefully, it won’t be another year before I get around to posting those parts.  So with that said, here is the post.  Note that some of the concepts employed in this post will be covered in parts 2-4.

For those who work with multiple generic classes that share the same generic type parameters, the ability to declare a generic namespace would be extremely useful.  Unfortunately, .Net (or at least C#) does not support the idea of generic namespaces.  We can, however, use a generic outer class to accomplish the same goal.  Take the following example classes related to a logical entity:

public  class       BaseDataObject
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
        where       tDataObject     : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new()
        where       tBusiness       : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataAccess     : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess>
{
}

public  class       BaseDataObjectList
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
:   
                    CollectionBase<tDataObject>
        where       tDataObject     : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new()
        where       tBusiness       : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataAccess     : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess>
{
}

public  interface   IBaseBusiness
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
        where       tDataObject     : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new()
        where       tBusiness       : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataAccess     : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess>
{
}

public  interface   IBaseDataAccess
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
        where       tDataObject     : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new()
        where       tBusiness       : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess>
        where       tDataAccess     : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess>
{
}
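
To see why this gets unwieldy, consider what a concrete entity has to spell out when deriving from these types directly (a hypothetical User entity; every type repeats the full list of type arguments):

public  class       UserDataObject
:                   BaseDataObject<UserDataObject, UserDataObjectList, IUserBusiness, IUserDataAccess>
{
}

public  class       UserDataObjectList
:                   BaseDataObjectList<UserDataObject, UserDataObjectList, IUserBusiness, IUserDataAccess>
{
}

public  interface   IUserBusiness
:                   IBaseBusiness<UserDataObject, UserDataObjectList, IUserBusiness, IUserDataAccess>
{
}

public  interface   IUserDataAccess
:                   IBaseDataAccess<UserDataObject, UserDataObjectList, IUserBusiness, IUserDataAccess>
{
}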

We can simplify the signatures of these classes by using a generic namespace (implemented via nested classes):

    public
    partial class   Entity
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
            where   tDataObject     : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObject
            where   tDataObjectList : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObjectList, new()
            where   tBusiness       : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseBusiness
            where   tDataAccess     : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseDataAccess
    {

        public  class       BaseDataObject {}

        public  class       BaseDataObjectList : CollectionBase<tDataObject> {}
        
        public  interface   IBaseBusiness {}
        
        public  interface   IBaseDataAccess {}

    }

 

Then, through the use of partial classes, you can split the nested classes out into their own files.  I recommend using a Visual Studio extension like NestIn to support nesting the partial class files.  This allows the “namespace” class files to also be used to organize the nested class files in a folder-like way.

For example:

Entity.cs

    public
    partial class   Entity
                    <
                        tDataObject, 
                        tDataObjectList, 
                        tBusiness, 
                        tDataAccess
                    >
            where   tDataObject     : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObject
            where   tDataObjectList : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObjectList, new()
            where   tBusiness       : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseBusiness
            where   tDataAccess     : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseDataAccess
    {
    }

Entity.BaseDataObject.cs

    partial class   Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>
    {

        public  class   BaseDataObject
        {

            public  DateTimeOffset  CreatedDateTime     { get; set; }
            public  Guid            CreatedById         { get; set; }
            public  Guid            Id                  { get; set; }
            public  DateTimeOffset  LastUpdateDateTime  { get; set; }
            public  Guid            LastUpdatedById     { get; set; }

            // Converts a single data object into a single item list of the
            // entity's list type.
            public
            static
            implicit    operator    tDataObjectList(BaseDataObject dataObject)
            {
                var returnList  = new tDataObjectList();
                returnList.Add((tDataObject) dataObject);
                return returnList;
            }

        }
        
    }

Entity.BaseDataObjectList.cs

    partial class   Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>
    {

        public  class   BaseDataObjectList : CollectionBase<tDataObject>
        {

            public  tDataObjectList ShallowClone() 
            {
                var returnList  = new tDataObjectList();
                returnList.AddRange(this);
                return returnList;
            }
        
        }

    }

Entity.IBaseBusiness.cs

    partial class   Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>
    {

        public  interface   IBaseBusiness
        {
            tDataObjectList Load();
            void            Delete();
            void            Save(tDataObjectList data);
        }

    }

Entity.IBaseDataAccess.cs

    partial class   Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>
    {

        public  interface   IBaseDataAccess
        {
            tDataObjectList Load();
            void            Delete();
            void            Save(tDataObjectList data);
        }

    }

The files in the Visual Studio Solution Explorer would then be organized as such:

    Entity.cs
    +   Entity.BaseDataObject.cs
    +   Entity.BaseDataObjectList.cs
    +   Entity.IBaseBusiness.cs
    +   Entity.IBaseDataAccess.cs

And you would implement the generic namespace like the following:

User.cs

    public
    partial class   User
    :
                    Entity
                    <
                        User.DataObject, 
                        User.DataObjectList, 
                        User.IBusiness, 
                        User.IDataAccess
                    >
    {
    }

User.DataObject.cs

    partial class   User
    {

        public  class   DataObject : BaseDataObject 
        {
            public  string  UserName            { get; set; }
            public  byte[]  PasswordHash        { get; set; }
            public  bool    AccountIsEnabled    { get; set; }
        }
        
    }

User.DataObjectList.cs

    partial class   User
    {

        public  class   DataObjectList : BaseDataObjectList {}

    }

User.IBusiness.cs

    partial class   User
    {

        public  interface   IBusiness : IBaseBusiness {}

    }

User.IDataAccess.cs

    partial class   User
    {

        public  interface   IDataAccess : IBaseDataAccess {}

    }

And the files would be organized in the solution explorer as follows:

    User.cs
    +   User.DataObject.cs
    +   User.DataObjectList.cs
    +   User.IBusiness.cs
    +   User.IDataAccess.cs
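
As a quick illustration, here is a minimal usage sketch of the nested types of the User “namespace” above (it assumes the CollectionBase<tDataObject> base class used earlier exposes the usual Add/AddRange members):

    // Hypothetical usage sketch; not one of the files listed above.
    var user    = new User.DataObject
    {
        Id                  = Guid.NewGuid(),
        UserName            = "jdoe",
        AccountIsEnabled    = true
    };

    // The implicit operator defined on BaseDataObject converts a single
    // data object into a single item DataObjectList.
    User.DataObjectList users   = user;

    // ShallowClone comes from BaseDataObjectList and returns the concrete list type.
    User.DataObjectList copy    = users.ShallowClone();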

The above is a simple example of using an outer class as a generic namespace.  I’ve built “generic namespaces” containing 9 or more type parameters in the past.  Having to keep those type parameters synchronized across the nine types that all needed to know the type parameters was tedious, especially when adding a new parameter.  The use of generic namespaces makes that code far more manageable and readable.

Generics in .Net – Part 1 – The Basics

One of the most powerful features of .Net, in my opinion, and one that really sets it apart from other language frameworks, is its implementation of generics.  This post will be the first in a multiple part series on the subject.

What are Generics?

So, what are generics and how do we use them?  Generics, or generic programming, is a form of coding where type parameters are declared in the signatures of classes or methods in place of specific types so that they can be specified later as type arguments.  These type parameters are then used in place of specific types in the implementation of the classes or methods where they are declared.  The following are examples of declared type parameters on a generic class and on a couple of methods of a normal class:

public  class   GenericClass<T>
{
    private T   someValue;

    public      GenericClass(T someValue) { this.someValue = someValue; }
}

public  class   NormalClass
{

    public  T       GetValue<T>(string key) {...}

    public  void    SetValue<T>(string key, T value) {...}

}

In the above examples, <T> declares that the class or method where it is defined contains zero or more references to an as-yet-unspecified type T that will be supplied later.  Once a type argument is supplied, every place where T is used is effectively substituted with that type.
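
For example, the type arguments might be supplied like this (a small usage sketch against the classes above; the elided method bodies are assumed to store and retrieve values by key):

GenericClass<int>       wrappedNumber   = new GenericClass<int>(42);
GenericClass<string>    wrappedText     = new GenericClass<string>("hello");

NormalClass             settings        = new NormalClass();
settings.SetValue<int>("timeout", 30);
int timeout = settings.GetValue<int>("timeout");

// The compiler can often infer the type argument from the supplied arguments:
settings.SetValue("retries", 5);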

Consuming Generic Classes

Perhaps the most common use of generics in .Net is the use of the generic classes found within the System.Collections.Generic namespace.  And of these classes, perhaps the most commonly used is the List<T> class.  The List<T> class is the generic equivalent of the ArrayList class.  It is virtually identical in function, as both implement the IList interface and both essentially contain and manage an array of items.  Where the List<T> class shines is in its type safety: it ensures that each instance of the class contains only a homogeneous list of type T or its derivatives.  ArrayList, by comparison, is homogeneous only to the System.Object type.  Since nearly all types derive from System.Object, the ArrayList class does not do much to ensure type safety in most practical cases, especially when one has a specific subtype in mind for a homogeneous list.

So the List<T> class allows you to specify a type argument for parameter T that “constrains” T to a specific type or its derivatives.  When you construct an instance of List<T> at run-time, the type argument that you provide is effectively substituted for T all throughout the implementation of List<T> and a new class type is created.  Any further instances created using the same type argument for T are also instances of this new class type.

One of the main advantages of this form of reuse is in leveraging the same general functionality of a list type across multiple item types without having to repeatedly write the same general code.  Take the following example class snippets:

public  class   IntList
{
    ...
    public  int GetValueAtIndexOrDefault(int index, int defaultValue)
    {
        int returnValue;
        if (!this.TryGetValue(index, out returnValue))  returnValue = defaultValue;
        return returnValue;
    }
    ...
}

public  class   StringList
{
    ...
    public  string  GetValueAtIndexOrDefault(int index, string defaultValue)
    {
        string  returnValue;
        if (!this.TryGetValue(index, out returnValue))  returnValue = defaultValue;
        return returnValue;
    }
    ...
}

Consider how the two implementations of GetValueAtIndexOrDefault are virtually identical.  They vary only in the use of the types int and string with regard to the type of items each list contains.  Using generics, the above two implementations can be abstracted into the following:

public  class   List<T>
{
    ...
    public  T   GetValueAtIndexOrDefault(int index, T defaultValue)
    {
        T   returnValue;
        if (!this.TryGetValue(index, out returnValue))  returnValue = defaultValue;
        return returnValue;
    }
    ...
}

Notice how the code is now DRYed up.  Whatever type argument we specify is substituted for the type parameter T.  To effectively replace the IntList and StringList class types, we would create instances of List<int> and List<string> respectively.
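
To make the substitution concrete, here is a small usage sketch (it assumes the List<T> sketched above also exposes the familiar Add member, like the BCL List<T> it mirrors):

var ages    = new List<int>();
var names   = new List<string>();

ages.Add(34);
names.Add("Ann");

int     secondAge   = ages.GetValueAtIndexOrDefault(1, -1);          // no second item, returns -1
string  secondName  = names.GetValueAtIndexOrDefault(1, "unknown");  // no second item, returns "unknown"

// Type safety: the compiler rejects mixing item types.
// ages.Add("not a number");    // compile-time error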

I plan on posting more on the topic of generics in future posts covering the following concepts:

  • Generics Inheritance (Parametric Polymorphism)
  • Understanding Generic Type Parameter Constraints (Bounded Parametric Polymorphism)
  • Subclass Type Parameter Constraint
  • Generic Namespaces

 

Strive for one call per user action

So while working on my assigned project at work, I observed some lag populating some drop downs on a screen that I was on.  Immediately, I suspected that the drop downs were being loaded from asynchronous calls performed after the screen had loaded.  So I fired up Firebug, and sure enough, there they were: three calls being made to the server to get three different sets of data to populate three drop downs.  Despite the calls being small and relatively quick (28-49ms each), the effect was still a visible lag, long enough that I was able to open one of the drop downs before it had finished being populated.

Now some might say, so what?  Big deal.  Why fuss over 49ms?  Well, the very visible lag is just a symptom of a problem that could easily escalate should more drop downs of this nature get added to the screen.  And if this “pattern” is replicated on other, more complicated screens, it could lead to user frustration and inefficient use of resources, especially if the app is stateless and each request must be authenticated.

I’ve followed a guideline, almost a rule, over the past 4 years that I have been building service oriented single page apps, and that is: perform no more than one call per user action.  A user initiated action is nothing more than a simple use case, and simple use cases should define a contract that specifies what must be submitted and what must be provided if the preconditions are met.  To me, these extra calls are being made because the use case requirements were not fully implemented as one service call.  Think of the activity flow or sequence diagram one might draw for a given use case.  In its simplest form, the user actor initiates some action with the system and the system responds.  Now, I’m not talking about calls for images or other visual elements; when I refer to calls, I’m talking about service method invocations.  This, by and large, is why I take issue with using (non-pragmatic) RESTful services with their full HATEOAS driven implementations as the model for building services for client side applications, but that’s for another post on another day.

Another concern that I have when I see chatty applications engaging in this type of behavior is that it indicates the client application is more familiar with the intimate details of the middleware than it probably should be.  This could lead to unintended exposure of pieces of the system that should otherwise be encapsulated, which in turn could raise security concerns and allow new, unexpected permutations of workflows and interactions to occur with those functions.

Now it may seem that I am arguing against reusability here, but I assure you that it’s quite the opposite.  One could still have those same fine grained functions, but encapsulate them as implementation detail code that is reused by more coarse grained, use case specific methods.  In fact, by doing this you will likely find that some of the functionality that resides in the front end code might make more sense encapsulated in the use case service method code on the server.  That code would then be automatically reused if one were to build an alternative front end that leveraged the same service methods.  You might even reduce the amount of data that you are sending over the wire, especially if some of it is only going to be filtered out or is just used to help process or relate the data.

For example, let’s say that we organize products by manufacturers.  And let’s say that we have two drop down lists to filter with, one that is a list of manufacturers and the other a list of products.  If we obtain these two lists separately and then drive the contents displayed in the product list based on the selected manufacturer, then we are writing code on the front end to handle that processing.  Additionally, we likely have a manufacturer id, or some sort of code, possibly the name, on each product record to associate the products with the manufacturers.  An alternative approach would be to send down a dictionary with the manufacturer names as keys and object values containing each manufacturer’s products already broken out.  The processing of the two lists would then be completed on the server, and an alternative front end would not have to repeat the same logic.  Consider the following:

[
    "Acme",
    "Good Company",
    ...,
    "Sears"
]
and 
{
    "Anvil":            "Acme",
    "Dynamite":         "Acme",
    "Paint on hole":    "Acme",
    "Bandage":          "Good Company",
    ...,
    "Garden Hose" :     "Sears"
}

vs.

{
    "Acme":         { "products": ["Anvil", "Dynamite", "Paint on hole"] },
    "Good Company": { "products": ["Bandage"] },
    ...,
    "Sears":        { "products": ["Garden Hose"] }
}

The code on the front end can now simply rebind the product list control with the products property of the manufacturer object bound to the item selected in the manufacturer list control.  With the two separate lists, the front end code would have to scan through the list of products to locate the related products to bind to the product list control whenever the selected item in the manufacturer list changes.  Consider also that there may be even more complicated rules that affect those lists, such as the user’s location for availability or a combination of other factors, and it becomes easier to see why we might want to move that concern to a more centralized location.
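
As a rough sketch of what that server side shaping could look like in C#, here is one way to group a flat product list by manufacturer; the Product type and its property names are hypothetical stand-ins for whatever the data layer actually returns:

using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string Name          { get; set; }
    public string Manufacturer  { get; set; }
}

public class ManufacturerProducts
{
    // Lower-case name chosen only to match the JSON shape shown above.
    public List<string> products { get; set; }
}

public static class ProductCatalog
{
    // Groups the flat product list into the manufacturer-keyed structure above,
    // so every front end receives the data pre-related.
    public static Dictionary<string, ManufacturerProducts> GroupByManufacturer(IEnumerable<Product> products)
    {
        return products
            .GroupBy(product => product.Manufacturer)
            .ToDictionary(
                group => group.Key,
                group => new ManufacturerProducts
                         {
                             products = group.Select(product => product.Name).ToList()
                         });
    }
}

Serialized to JSON, the result matches the second shape above, and the front end simply rebinds on selection.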

I’m interested in other opinions on this topic.  What do you think?  Should we strive to make our front ends pretty and dumb, or should we move more processing logic to the front end and have finer grained access to resources from the middleware?  Please leave your thoughts in the comments below.

 

Atomic Stack Unit Tests

Just wanted to post a quick update regarding the development efforts on Atomic Stack.  I’ve started writing unit tests for the classes developed so far.  Development on the target code will be mostly halted until the tests have caught up.  After that, I plan on adhering to TDD practices going forward on the project.

I’ve started with unit tests for the .Net side, but also plan on writing tests for the web/js side.  If anyone has any suggestions on JavaScript unit testing tools to check out, please leave a comment below.  I should be exploring options for testing the web side over the next couple of weeks.  I’ve already planned ahead for unit testing on the web side by abstracting away the HTML DOM by way of the as-of-yet incomplete baseApplication class.  There is an htmlDomApplication concrete class that provides the wiring to the HTML DOM.  I plan on using an alternative implementation to provide a mock for unit testing.

Subclassable Enums

So today, I found myself needing to map exception types to HTTP status codes for the purpose of looking up which status code to report back from any service endpoint invocation that has been interrupted by an unhandled exception.  Now, I could have simply set up a lazily instantiated static instance of a Dictionary<Type, HttpStatusCode> somewhere and referred to it.  Or I could have set up a function with a switch statement on an exception parameter’s type, casing on typeof() calls on various exception types to return HttpStatusCodes.  Each of these has its drawbacks, though.  The switch statement would only apply in this one case of translating the types.  The dictionary would only provide a map to translate from the exception type to an HTTP status code.  If we ever decided that we wanted to make other decisions or take other actions based on one of the exception types, we would either have to write more switch statements or expand the value type of the dictionary.
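
For context, the dictionary alternative I decided against would have looked something like the following sketch (the specific exception-to-status pairings are just illustrative):

using System;
using System.Collections.Generic;
using System.Net;

internal static class ExceptionStatusCodeMap
{
    // Lazily instantiated map from exception type to HTTP status code.
    private static readonly Lazy<Dictionary<Type, HttpStatusCode>> map =
        new Lazy<Dictionary<Type, HttpStatusCode>>(() => new Dictionary<Type, HttpStatusCode>
        {
            { typeof(KeyNotFoundException),        HttpStatusCode.NotFound     },
            { typeof(UnauthorizedAccessException), HttpStatusCode.Unauthorized },
            { typeof(ArgumentException),           HttpStatusCode.BadRequest   }
        });

    public static HttpStatusCode Resolve(Exception exception)
    {
        HttpStatusCode statusCode;
        return map.Value.TryGetValue(exception.GetType(), out statusCode)
            ?   statusCode
            :   HttpStatusCode.InternalServerError;
    }
}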

So I discussed using a subclassable enum with my colleague.  I built a reference implementation using the Subclassable Enum implementation from AtomicStack.  Its class signature looked something like this:

public class ExceptionTypeEnum : SubclassableEnum<ExceptionTypeEnum, Type>

and its constructor looked like the following:

protected ExceptionTypeEnum(Type exceptionType, HttpStatusCode statusCode) : base(exceptionType) {}
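
Fleshing that out a bit, a sketch of how the entries and the status code lookup might look is below.  This is my own illustration rather than the actual class from that project; the specific exception-to-status pairings and the StatusCode property are assumptions:

using System;
using System.Collections.Generic;
using System.Net;

public class ExceptionTypeEnum : SubclassableEnum<ExceptionTypeEnum, Type>
{
    // Hypothetical entries; each maps an exception type to an HTTP status code.
    public static readonly ExceptionTypeEnum NotFound     = new ExceptionTypeEnum(typeof(KeyNotFoundException),        HttpStatusCode.NotFound);
    public static readonly ExceptionTypeEnum Unauthorized = new ExceptionTypeEnum(typeof(UnauthorizedAccessException), HttpStatusCode.Unauthorized);
    public static readonly ExceptionTypeEnum BadRequest   = new ExceptionTypeEnum(typeof(ArgumentException),           HttpStatusCode.BadRequest);

    public HttpStatusCode StatusCode { get; private set; }

    protected ExceptionTypeEnum(Type exceptionType, HttpStatusCode statusCode)
    :   base(exceptionType)
    {
        this.StatusCode = statusCode;
    }
}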

This has led me to write this post about what exactly a subclassable enum is and some of the ways it can be useful.  First, let’s start with some of the problems that the subclassable enum helps to solve.

One of the things some people have wished they could do in .Net is create enums based on string values.  With the standard enum construct, the set of constants defined by the enum must have an underlying integral type.  If no type is specified, the underlying type defaults to Int32.  The following is not a valid .Net enum:

public enum Status { active = "active", inactive = "inactive" }

However, using the Atomic.Net StringEnum class enables the following:

public class Status : StringEnum<Status>
{
    public static readonly Status Active   = new Status("active");
    public static readonly Status Inactive = new Status("inactive");

    protected Status(string status) : base(status) {}
}

This Status class can then be used in a parameter definition, as in the SetStatus method of the following class:

public class Person
{
    public Status AccountStatus { get; private set; }
    public void SetStatus(Status status){ ... }
}

Another useful feature of subclassable enums is the ability to allow others, in a controlled way, to extend the list of enum values.  As long as you don’t mark the enum class as sealed, it is open to extension.  For example, consider the following extensions to the Status enum from above:

public class ExtendedStatus : Status
{
    public static readonly Status Pending = new ExtendedStatus("pending");
    public static readonly Status Locked  = new ExtendedStatus("locked");

    protected ExtendedStatus(string status) : base(status) {}
}

Now we can call the SetStatus method from above with any of the following calls:

public void CallSetStatus()
{
    Person person = new Person();
    // The original Status values work
    person.SetStatus(Status.Active);
    person.SetStatus(Status.Inactive);
    // The original Status values are available via ExtendedStatus too
    person.SetStatus(ExtendedStatus.Active);
    person.SetStatus(ExtendedStatus.Inactive);

    // The new Status values also work
    person.SetStatus(ExtendedStatus.Pending);
    person.SetStatus(ExtendedStatus.Locked);
}

With subclassable enums, iterating over the list of registered enum values is simple.  For example, consider the following iteration over the Status enum from above:

foreach(Status status in Status.AllValues) { ... }

Or you can iterate over their underlying values with the following:

foreach(String status in Status.AllNaturalValues) { ... }
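
For anyone curious how that iteration can be supported, here is a deliberately minimal sketch of the registration idea behind it; the actual AtomicStack implementation (linked at the end of this post) is more involved:

using System.Collections.Generic;

// Minimal sketch only: each constructed value registers itself in a static list
// on the closed generic base type, which is what makes enumeration possible.
// Note that values appear in the list once their declaring type's static
// fields have been initialized.
public abstract class SimpleStringEnum<TEnum>
    where TEnum : SimpleStringEnum<TEnum>
{
    private static readonly List<TEnum> registeredValues = new List<TEnum>();

    public string NaturalValue { get; private set; }

    protected SimpleStringEnum(string naturalValue)
    {
        this.NaturalValue = naturalValue;
        registeredValues.Add((TEnum) this);
    }

    public static IEnumerable<TEnum> AllValues { get { return registeredValues; } }

    public static IEnumerable<string> AllNaturalValues
    {
        get
        {
            foreach (TEnum value in registeredValues) yield return value.NaturalValue;
        }
    }

    public override string ToString() { return this.NaturalValue; }
}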

Subclassable enums also benefit from being classes like any other.  You can define enums that are abstract and require subclass implementations to override abstract functionality.  For example, consider the following change to the Status enum:

public abstract class Status : StringEnum<Status>
{
    private class ActiveStatus : Status
    {
        public static readonly ActiveStatus instance = new ActiveStatus();
        private ActiveStatus() : base("active"){}

        public override List<Permission> GetAppPermissions() { ... }
    }

    private class InactiveStatus : Status
    {
        public static readonly InactiveStatus instance = new InactiveStatus();
        private InactiveStatus() : base("inactive"){}

        public override List<Permission> GetAppPermissions() { ... }
    }
    public static readonly Status Active = ActiveStatus.instance;
    public static readonly Status Inactive = InactiveStatus.instance;

    protected Status(string status) : base(status) {}

    public abstract List<Permission> GetAppPermissions();
}

With a classical enum, you very likely would have had to define a helper utility method such as the following:

public static List<Permission> GetStatusPermissions(Status status)
{
    switch(status)
    {
        case Status.Active:
            ...
        case Status.Inactive:
            ...
    }
}

Now consider that you want to make another decision based on a status.  For example, let’s say that you want to optionally log user activity based on status.  With a classical enum you might write a helper utility method like the following:

public void LogActivityMessage(string message, Status status)
{
    switch(status)
    {
        case Status.Active:
            // do nothing
            break;
        case Status.Inactive:
            ...
    }
}

With the Status subclass of StringEnum, you would simply add a new abstract method called LogActivity (overridden by each nested status class) and call it in the following way:

public void TestLogActivity()
{
    Person person = new Person();
    person.SetStatus(Status.Inactive);
    person.AccountStatus.LogActivity("...");
}

Since the LogActivity method is abstract, all enumerated values are now required to at least implement the method.  With the classical enums, the LogActivity utility method may have been defined far from the GetStatusPermissions method.  There is no guarantee that new statuses that are added to the classical enum actually get cases defined for them across the various related switch statements.

And finally, another benefit of subclassable enums is that you are not restricted to a single type of underlying value for each enum entry.  There may be times when you would like an enum to represent two or more different types of values for a given entry.  Consider the following change to the Status enum, for example:

public abstract class Status : StringIntegerEnum<Status>
{
    private class ActiveStatus : Status
    {
        public static readonly ActiveStatus instance = new ActiveStatus();
        private ActiveStatus() : base("active", 1){}

        public override List<Permission> GetAppPermissions() { ... }
    }

    private class InactiveStatus : Status
    {
        public static readonly InactiveStatus instance = new InactiveStatus();
        private InactiveStatus() : base("inactive", 0){}

        public override List<Permission> GetAppPermissions() { ... }
    }
    public static readonly Status Active = ActiveStatus.instance;
    public static readonly Status Inactive = InactiveStatus.instance;

    protected Status(string status, int statusCode) : base(status, statusCode) {}

    public abstract List<Permission> GetAppPermissions();
}

Now each status presents both a string constant and an integer constant.  The StringIntegerEnum base class provides the ability to obtain unique lists of both kinds of underlying values, as well as to substitute the enum entry for either a string or an integer.  So, for example, the status might be stored in the database using its integer value, but operated on mostly by its string value in the middleware code.  Just as StringEnums can be converted to and from their string values, StringIntegerEnums can be converted to and from either their string values or their integer values.  This makes it possible to use subclassable enums as an enumerable mapping structure.

As you can see, subclassable enums provide a greater degree of flexibility and versatility than classical enums.  There is a cost, of course, to subclassable enums: in order to be derivable, these types are class instances and will not perform the same as classical enums.  But I think the trade off is likely worth it if you have any of the above requirements.  Additionally, avoiding the proliferation of switch statements based on classical enums may all by itself be justification enough.

Check out the subclassable enum implementation on Github in the AtomicStack project: SubclassableEnum.cs

Atomic Stack Coding Standards

I have started writing the Atomic Stack Coding Standards documentation. This documentation will begin with coding styles and best practices in an attempt to encourage consistency in the code among the various contributors.

Check it out at https://github.com/TyreeJackson/atomic/wiki/Coding-Standards.

Domain Specific Querying Language – Atomic.Net

This is just a quick post to inform you that I’ve begun development on what I’m calling the Atomic Domain Specific Querying Language, or ADSQL for short.  I’ve set up a TestService class in the AtomicWeb project to serve as a playground for exploring the language.  Please note that this code will not execute without throwing an exception, as there is very little implementation behind the declarations.  In fact, it’s mostly filled with thrown NotImplementedExceptions.  Execution is not a goal at this time.  Right now, I am planning on inviting others to explore the grammar of the language and to either contribute or suggest additional elements.  The language is defined in the Schema folder of the Atomic.Net project.  I will also be adding more elements over the next few days/weeks.

The basic strategy for consuming the language when using an IDE that provides IntelliSense style assistance is to use dot notation to discover the next applicable elements to choose from in order to continue writing a statement.  If typing a dot results in no suggestions, remove the dot and try a square bracket; this indicates that the language requires an input argument.  The input argument may be a lambda expression to branch into the language constructs of another related entity.  Consider the following as an example:

Example of a related query in Atomic.Net’s Domain Specific Querying Language

 

The .CreatedBy element chained from the .And expects a lambda expression in which the single argument provided to the expression will be the appropriate criteria language elements for the .CreatedBy property.  In this case, the argument provided for the createdByWhere parameter will be a User.Criteria language element.  The developer can then formulate the appropriate sub query criteria that applies to the CreatedBy property.

In addition to more language elements, I will also be adding a few more core entities and another project to demonstrate how application developers can add their own entities to the schema.

I’ll discuss the inner workings of how generics are used to construct and route the language elements in another post.  In the meantime, if you’re interested in how the language expressiveness is achieved, please explore the Schema folder in the Atomic.Net project.

See the latest at http://atomicstack.com.

 

Atomic Stack

So, I’ve been thinking about how I wanted to try to jump start my blog for a very long time.  The problem is that, for as much as I like technology, I’ve never really taken to the typed word.  I’ve always preferred to make a phone call rather than write up an email or send a text.  As fast as I can type, it seems that I can never type fast enough to get my ideas recorded, except perhaps in the case of programming, which is good, given that I’m a programmer.  But when it comes to free-flowing thoughts like those generally relayed via speech, I would definitely prefer to dictate than to actually type.  So, I’m going to attempt to use the Voice Memos iOS application as a way to stage the content for my blog.

Recently I decided to start an open source project, based on some opinions I had received at the St. Louis Days of .Net which echoed similar sentiments from the previous year’s conference.  You see, over the years I’ve been fortunate to have been tasked with solving some of the most difficult challenges faced by the various teams that I have been a part of.  I’ve also been fortunate to have worked with some very talented and skilled individuals on those teams.  Some I collaborated with and others I was literally schooled by.  I’ve somehow managed to hold on to the practices that have proven useful and advantageous in the various projects that I’ve worked on and assimilated them as recurring patterns.  I’ve described some of these patterns to certain individuals over the last few years, and nearly every time, I’ve been asked if any of it was embodied in an open source implementation.  The unfortunate answer has always been nothing that I’m involved with and nothing that I was aware of.

So the purpose of this open source project is to embody the set of tools that I will be implementing and leveraging during the course of the development of a personal closed source project of my own.  As such, requirements will flow from my personal project to the tool-set project.  The tool-set will be composed of a stack of technologies that I will leverage to build an n-tier web application.  These technologies will be new implementations based upon the patterns that I have successfully leveraged over the years.  They will include the things in the following list, which I plan to go into further detail on in future posts:

  • Configurable tier implementations across 8 tiers (data storage, data access/io, business enforcement, application services, service hosting, client side server access/io, client side controllers and client side views)
  • Three independent storage/transport schemas (data storage, entity model, and use case schemas)
  • Entity Model in memory indexing
  • Entity Model domain specific querying language built as a fluent api on top of .Net classes
  • Singleton/Multiton supporting services with lifetimes managed by an Abstract Factory implementation
  • Vertically integrated independent entity tiers relationally bound in the business tier with optimized distributed data access/io execution
  • Expansive configuration
  • Web server host abstraction (with IIS integration implementation, future implementations may include integration on top of OWIN)
  • Built in user account management and security authentication/authorization (including access challenge/negotiation)
  • Built in/expandable service hosting options (default web service implementation with multiple/expandable content negotiation options)
  • Subclassable Enums (and conversely enumerated subclasses, an excellent alternative to using switch statements)
  • Additional supporting data types
  • Advanced .net generics tricks (including subclass constraints/contexts and generic namespaces via nested classes)
  • General object based data storage services for supporting the practice of high fidelity functional prototyping
  • Interchangeable client side server proxies (supporting functional prototyping via an elaborate proxy implementation simulating future remote services)
  • Classical inheritance implementation on top of ECMAScript 5/JavaScript 1.8.5 with base class method call dispatching, public and protected access modifiers, instance and static scopes with constructors
  • Pure client side MVC solution implemented using JSON/HTML/JavaScript
  • and of course more…

The name of this set of tools is Atomic Stack.  The server side tiers (Atomic.Net) are being implemented in .Net using C# due to its amazing generics support.  The client side tiers (AtomicWeb/AtomicJS) are being built upon HTML5/ECMAScript 5/CSS 3.  The project goals will include adherence to development practices including:

  • Separation of Concerns (including Unobtrusive JavaScript)
  • Clean Coding Principles
  • Tier/Down Design and Development (with wide client side development cycles with narrow vertical server side development sprints)

In addition I will be looking at employing additional practices not currently in use including the following:

  • Design by Contract (at least for application hosted services)
  • Test Driven Development

I’m sure that if anyone actually comes across this blog, some may point out that there are a ton of frameworks and libraries out there.  They may question, do we really need yet another framework or library, much less a stack of them?  Frankly, I’m not sure.  I do know, however, that I have ideas and I would like to contribute those ideas in a tangible way to the community.  And due to my past experience, there is a certain degree of independence yet cohesiveness among these ideas, which is compelling me to attempt to start from scratch and create these new tools with minimal constraints, built solely upon the raw platforms that they target (.Net, JavaScript, HTML, CSS, etc.).  Of course I will very likely be incorporating additional dependencies upon things that are already well written and tested (like Mike Woodring’s DevelopMentor ThreadPool, HtmlAgilityPack and jQuery).  But it is very likely that I will avoid some tools whose implementations I find lacking (for example the MS Entity Framework).

Anyway, I’ve droned on long enough and I’m not quite sure how to end this post.  If you are interested in checking out the project, please visit http://atomicstack.com.

 

Mobile Webkit HTML/Javascript timing issue

Here is an interesting bug in Mobile Safari that I ran across at the beginning of this year.  Start a new web page with only the html and body tags.  In the body tag, add an onload attribute.  Inside of that, set the background color to some color other than white and then display an alert.  On most browsers, opening this page will display a screen with whatever color you chose along with an alert prompt.  In Mobile Safari, however, the screen background color will not change until after you dismiss the prompt.  I submitted a bug for this earlier this year.  It appears that this issue is still not fixed.