The Stochastic Game

Ramblings of General Geekery

Warning

I was just adding CPPUnit to one of my projects. It ships with old VC6 workspaces, so you need to update them to the latest Visual Studio. While updating them to VS2005, I got a few of those little dialogs:

So? Yes or No? Choose wisely!


Got burned by Vista

These days, I’m back doing PHP stuff at home, and therefore had to install Apache. Of course, I could do PHP under IIS7, which would be easier, what with the nice administration interface and all, but I need to recreate the same environment as my hosting solution, complete with .htaccess and all that stuff.

One of the first things you need to do in this situation is to add an alias to your websites because they probably aren’t located in the default document root. So I did that: I opened httpd.conf in Notepad, edited it, saved it, restarted Apache, and off I went to write some PHP code.

A few days later, I want to move my websites to another directory, and add a few other ones. This time, I open httpd.conf in SciTE because I remember it’s got some nice syntax highlighting for this. I edit it, save it, restart Apache, and… mmmh, Apache doesn’t look like it noticed the changes. It’s still using the old locations, and none of the new ones.

After a couple hours of debugging, reading the logs and cursing at Apache, I realize who’s to blame: Vista. I should have known… I mean, everybody’s blaming it for everything, so I should have followed the crowd from the start. As it always is in these situations, the basic problem was that I was not editing the correct file. This is one of the golden rules of troubleshooting:

If a program looks like it’s not picking up the changes in a file you just edited, you most likely edited the wrong file.

This time, though, I really didn’t think there was a problem. After all, I had an explorer window open on Apache‘s conf directory, and I was either drag and dropping httpd.conf on SciTE, right-clicking and using “Open in SciTE“, or going File > Open directly in SciTE.

When I entered “full paranoid mode” (you know, that mode you get into when nothing works and you feel you don’t trust anything anymore, even the magnetic fields and gravity), I noticed an unknown button in explorer:

Do you see it? It’s the “Compatibility Files” button on the tool bar. “What the fuck is that?“, I thought, and I clicked on it. It led me to this:

I’ve selected the address bar so you can see the physical location of this thing.

All my cursing immediately redirected to Vista.

Scott Hanselman already got burned by the same thing and he explains it all, so go there if you want to know more about this whole thing… but basically, it’s a backward compatibility system that allows “legacy” applications that don’t explicitly require privilege elevation to still “work” by being redirected to a virtual storage when they perform file system operations on items the current user doesn’t have access to. In plain language, it allows crappy applications that assume you’re an Administrator to work correctly even though you’re not.
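
For example (the Apache install path below is just for illustration), a write to:

    C:\Program Files\Apache Software Foundation\Apache2.2\conf\httpd.conf

made by a non-elevated process that Vista deems “legacy” actually lands in the per-user virtual store:

    C:\Users\<you>\AppData\Local\VirtualStore\Program Files\Apache Software Foundation\Apache2.2\conf\httpd.conf

and that per-user copy is what the “Compatibility Files” button shows.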

Now, the problem is that SciTE is not a crappy application. It’s just a text editor. When I opened the file, it should have seen that I didn’t have write access to it, and therefore treated it like any other read-only file: display a “Read Only” warning in the title bar, prevent me from editing or saving it, etc. But no, because Vista told it otherwise. Vista said “yeah, it’s okay, this file is totally writeable, no problem” while hiding the fact that it was redirecting that process to another place, so SciTE went along with it.

I understand that backward compatibility is a huge thing in the enterprise, and is actually one of the main reasons for Windows‘ commercial success, but it obviously fails in this situation. It’s wrong to tell a text editor that a configuration file is writeable when it’s not. It probably works in lots of other cases, allowing lots of big, expensive legacy programs to run correctly, but it fails here.

Scott suggests that the UI should do a better job of, well, not letting the user waste time figuring out where his files went, but I think the problem is deeper. I think this kind of “feature” should be a choice in the first place, not something that’s forced upon the user. The assumption that users are Administrators was, in the long run, a mistake, and there’s no way around a mistake without getting bitten in the ass at some point. There should be a way to just face it, and risk breaking programs. This is not desirable in most business places, but it may be desirable on the average user’s machine. Yes, it would break stuff, but Vista had a hard time with that anyway, and Apple managed to go through a similarly harsh transition before. We should be able to install Windows without 20 years of Side by Side libraries and “magical” weirdnesses like this one if we want to.


A variation on the auto-completing WPF TextBox

In this world of web 2.0 tagging madness, any good (or at least shiny enough) application needs a way to apply tags in a user-friendly way, namely with auto-completing text boxes. For example, the awesome Remember The Milk application has this:

What if we want a similar text box in WPF that would display a drop-down list of all the existing tags that match what the user is currently typing? The answer is, unfortunately, that we need to do it ourselves… but don’t be afraid, we’ll be playing around with cool things!

Auto-complete WPF text boxes have their fair share of articles on the web, but I find that Jason Kemp is the only one who wrote something interesting on the subject with his 4-part post (part 1, part 2, part 3, part 4). The main problem, however, is that all those articles deal only with auto-completing the full text of the text box, not each word, as is the case for a tag list. Let’s try to build our own solution, then, and at the same time come up with a variation of Jason’s implementation.

The thing I liked about Jason’s solution was that he was using the Attached Property Pattern, which is one of the very powerful concepts that WPF and Silverlight offer. This pattern has already been discussed by Nikhil Kothari, John Gossman, and, err, BenCon (what’s his real name anyway?). What I didn’t like with Jason’s solution, though, was that in the end he was hijacking the control template of the text box to replace it with one that contained the popup to be used for showing the auto-complete suggestions. Since it’s common for designers to already replace a control template to restyle a control, I went with a different approach, which is to specify the “external” popup you want to use.

It goes like this:

    <Grid
        Margin="10">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="*" />
        </Grid.ColumnDefinitions>
        <TextBlock>Tags: </TextBlock>
        <TextBox
            Grid.Column="1"
            Name="TextTags"
            local:TextBoxAutoComplete.WordAutoCompleteSource="{Binding Source={StaticResource Tags}}"
            local:TextBoxAutoComplete.WordAutoCompletePopup="{Binding ElementName=PopupTags}" />
        <Popup
            Name="PopupTags"
            IsOpen="False"
            Placement="Bottom"
            PlacementTarget="{Binding ElementName=TextTags}">
            <Grid>
                <ListBox Name="PART_WordsHost" />
            </Grid>
        </Popup>
    </Grid>

In this example, we have a text box called “TextTags” which has 2 attached properties: one that defines the source for all the suggestions, and one that defines the popup that will show the matching suggestions. The tricky part is that you need to know where to “feed” those matching suggestions. There’s a good chance there’s going to be some kind of list control in the popup where the suggestions should be inserted, but how to get to it?

It’s somewhat of a convention in WPF that if you need some “handshaking” between the look and the logic of a control, you should use named controls with a “PART_” prefix. This is what happens here, with the ListBox being called “PART_WordsHost”. This decouples the logic from the appearance, letting you arrange the popup as you want, or even use a ComboBox or a ListView instead of a ListBox.

You can read more about this “PART_” business in books like “WPF Unleashed” or “Programming WPF” (I haven’t found a good article on the web yet). Also, note that this convention is usually used between the logic of a control and its own control template, which is not exactly the case here, but it works all the same.

Now let’s look at the interesting bits in the code. This is implemented in a static class called “TextBoxAutoComplete”.

First, we need to register the attached dependency properties.

public static readonly DependencyProperty WordAutoCompleteSourceProperty;
public static readonly DependencyProperty WordAutoCompleteSeparatorsProperty;
public static readonly DependencyProperty WordAutoCompletePopupProperty;
static TextBoxAutoComplete()
{
    var metadata = new FrameworkPropertyMetadata(OnWordAutoCompleteSourceChanged);
    WordAutoCompleteSourceProperty = DependencyProperty.RegisterAttached("WordAutoCompleteSource", typeof(IEnumerable), typeof(TextBoxAutoComplete), metadata);

    metadata = new FrameworkPropertyMetadata(",;");
    WordAutoCompleteSeparatorsProperty = DependencyProperty.RegisterAttached("WordAutoCompleteSeparators", typeof(string), typeof(TextBoxAutoComplete), metadata);

    metadata = new FrameworkPropertyMetadata(OnWordAutoCompletePopupChanged);
    WordAutoCompletePopupProperty = DependencyProperty.RegisterAttached("WordAutoCompletePopup", typeof(Popup), typeof(TextBoxAutoComplete), metadata);
}
public static void SetWordAutoCompleteSource(TextBox element, IEnumerable value)
{
    element.SetValue(WordAutoCompleteSourceProperty, value);
}

public static IEnumerable GetWordAutoCompleteSource(TextBox element)
{
    return (IEnumerable)element.GetValue(WordAutoCompleteSourceProperty);
}

public static void SetWordAutoCompleteSeparators(TextBox element, string value)
{
    element.SetValue(WordAutoCompleteSeparatorsProperty, value);
}

public static string GetWordAutoCompleteSeparators(TextBox element)
{
    return (string)element.GetValue(WordAutoCompleteSeparatorsProperty);
}

public static void SetWordAutoCompletePopup(TextBox element, Popup value)
{
    element.SetValue(WordAutoCompletePopupProperty, value);
}

public static Popup GetWordAutoCompletePopup(TextBox element)
{
    return (Popup)element.GetValue(WordAutoCompletePopupProperty);
}

You can check MSDN for some explanation on this. Don’t forget the static getters and setters, because without them, the XAML syntax won’t work.

Here are the “dependency property changed” callbacks:

private static void OnWordAutoCompleteSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    TextBox textBox = (TextBox)d;
    SetWordsHostSourceAndHookupEvents(textBox, (IEnumerable)e.NewValue, GetWordAutoCompletePopup(textBox));
}

private static void OnWordAutoCompletePopupChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    TextBox textBox = (TextBox)d;
    SetWordsHostSourceAndHookupEvents(textBox, GetWordAutoCompleteSource(textBox), (Popup)e.NewValue);
}

private static void SetWordsHostSourceAndHookupEvents(TextBox textBox, IEnumerable source, Popup popup)
{
    if (source != null && popup != null)
    {
        //TODO: make sure we do this only this once, in case for some reason somebody re-sets one of the attached properties.
        textBox.PreviewKeyDown += new KeyEventHandler(TextBox_PreviewKeyDown);
        textBox.SelectionChanged += new RoutedEventHandler(TextBox_SelectionChanged);
        textBox.TextChanged += new TextChangedEventHandler(TextBox_TextChanged);

        Selector wordsHost = popup.FindName("PART_WordsHost") as Selector;
        if (wordsHost == null)
            throw new InvalidOperationException("Can't find the PART_WordsHost element in the auto-complete popup control.");
        wordsHost.IsSynchronizedWithCurrentItem = true;
        wordsHost.ItemsSource = source;
        textBox.SetValue(WordAutoCompleteWordsHostPropertyKey, wordsHost);
    }
}

What this code does is that whenever a text box has been assigned both a source and a popup for the auto-complete feature, we hook up some event handlers on the text box to listen for user input. We also look for the selector control that will host the auto-complete suggestions (in our case, the ListBox), and set its data source to be our own source for those suggestions.

Obviously, this code needs some more work, like making sure we’re not re-registering the same events twice (which shouldn’t happen if you only set this in XAML). Also, as I mentioned earlier, we’re only looking for a control named “PART_WordsHost”, and we throw an exception if we don’t find it, or if it’s not a selector control. We could be a bit nicer and explore the popup’s tree, looking for some selector control, but this exception will be raised the first time the user tests his code anyway, and it’s pretty self-explanatory, so I think it’s not a big deal for now.

Now that we’ve hooked up some event handlers to the text box, we can start doing the work. I won’t post the whole code here, as it’s mostly boring logic code that figures out what the user is doing and shows or hides the popup accordingly. However, there’s a few interesting things to discuss about this code.

First, we have a dependency property set up that contains the separator characters we should use to split and parse the text box’s text. Since our main use case is tags, the default separators are “,” and “;”. Use the TextBox’s CaretIndex, SelectionStart and SelectionLength to figure out where the user is currently typing things.
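
For reference, here’s a minimal sketch of that word-extraction logic (the method and variable names here are mine, not from the attached code):

private static string GetCurrentWord(TextBox textBox, string separators)
{
    // Walk left and right from the caret until we hit a separator character.
    string text = textBox.Text;
    int caret = textBox.CaretIndex;

    int start = caret;
    while (start > 0 && separators.IndexOf(text[start - 1]) < 0)
        start--;

    int end = caret;
    while (end < text.Length && separators.IndexOf(text[end]) < 0)
        end++;

    return text.Substring(start, end - start).Trim();
}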

Second, we can take advantage of the filtering features of WPF’s data binding (more specifically the ICollectionView). When we have figured out what tag name the user is currently typing, we can set the filtering delegate on the words host items collection:

private static void TextBox_TextChanged(object sender, TextChangedEventArgs e)
{
    TextBox textBox = (TextBox)sender;
    Popup popup = GetWordAutoCompletePopup(textBox);
    string currentWord;// (SKIPPED CODE: get the currently typed word)
    if (currentWord.Length > 0)
    {
        // Filter all the auto-complete suggestions with what the user is currently typing.
        Selector wordsHost = (Selector)textBox.GetValue(WordAutoCompleteWordsHostProperty);
        wordsHost.Items.Filter = o => GetTextSearchText(wordsHost, o).StartsWith(currentWord, StringComparison.CurrentCultureIgnoreCase);
        // (MORE SKIPPED CODE) 
    }
    // (MORE SKIPPED CODE)
}

This leads us to another interesting point… see that “GetTextSearchText” method? It’s supposed to return the string representation of any item in the auto-complete suggestion list. Because we’re doing all this to handle tags, we know each item will actually be a string, so it’s easy. But it’s not so easy if you’re handling custom objects, and you want some kind of auto-complete text box for writing lists of such objects.

In this case, you would probably get the TypeConverter of each object and use that to get a string representation. After all, you need some bi-directional conversion in order to later parse the list the user typed in the text box! The TypeConverter is well suited to this task, but the whole thing made me look at the TextSearch class.
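
A minimal sketch of that idea, in case your items aren’t strings (this is not the code from the attachment; the wordsHost parameter is only there to match the signature used above):

private static string GetTextSearchText(Selector wordsHost, object item)
{
    if (item is string)
        return (string)item;

    // Fall back to the item's TypeConverter to get a display string.
    // A fancier version could also honor TextSearch.TextPath set on wordsHost.
    TypeConverter converter = TypeDescriptor.GetConverter(item);
    if (converter != null && converter.CanConvertTo(typeof(string)))
        return converter.ConvertToString(item);

    return item.ToString();
}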

TextSearch is a class that defines attached properties that allow you to control the text representation of objects in search and auto-complete scenarios. The typical example of this is for editable ComboBoxes, where the user can type something in the editable part, and the system automatically pre-selects the correct entry in the ComboBox’s drop-down menu (actually, simple auto-complete text boxes can be implemented with a ComboBox and a few lines of XAML!). In this case, each entry in the ComboBox can have a TextSearch attached property that provides the string they can be matched against. Looking at the ComboBox’s code shows that it calls methods on TextSearch like “FindMatchingString(Object)” or “GetPrimaryText(Object)”. Those methods are nice because they handle all kinds of other cases, like when the object is an XmlNode, or is a FrameworkElement. The problem is that those methods are internal, so we can’t use them! Some people don’t want to rewrite that logic code and, hoping Microsoft will one day make those methods public, use awful ways in the meantime to work around the problem. I can’t really blame them, especially when they feel bad already.

Luckily, in our case, we’re only manipulating strings so all is fine, but you may want to rewrite that “GetTextSearchText” method to your liking.

The last point I wanted to discuss is that this kind of attached behaviour sometimes needs to store some custom data on its target. In our case, we may want to “cache” the control that acts as the words host, because we don’t want to look for “PART_WordsHost” every time the user types something. If you look at that last snippet of code, you can see I stored a reference to that control in a dependency property called “WordAutoCompleteWordsHostProperty”.

This dependency property is a read-only attached property, and those are pretty useful to store things only you should be able to set. I use them here to cache the words host control, but also to set a variable that tells us whether a selection change in the text box was caused by text input, or caused by the user navigating left and right (using the left and right keys, or clicking on the text with the mouse… this has some effect on whether to show or hide the popup).
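
The declaration of that read-only attached property isn’t shown above, but it would look roughly like this (the metadata is my guess):

// The key stays private so only TextBoxAutoComplete can set the value;
// the public property is what the event handlers read back with GetValue.
private static readonly DependencyPropertyKey WordAutoCompleteWordsHostPropertyKey =
    DependencyProperty.RegisterAttachedReadOnly(
        "WordAutoCompleteWordsHost", typeof(Selector), typeof(TextBoxAutoComplete),
        new FrameworkPropertyMetadata(null));

public static readonly DependencyProperty WordAutoCompleteWordsHostProperty =
    WordAutoCompleteWordsHostPropertyKey.DependencyProperty;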

Well, I believe that sums it up. The code is attached to this post, with a demo application that uses all the names of the “Key” enumeration as the source for auto-complete suggestions. See the screenshot below:

Note that this demo code won’t be maintained. To get my latest version of it, you can download the source code of Milkify and look for TextBoxAutoComplete.cs. For example, I plan to try and get the popup to show up just below the currently typed tag name, instead of aligned at the bottom left of the text box… but that’s all details. You have everything you need to write your own!

TextBoxAutoCompleteDemo.zip (10.78 kb)


WPF application patterns continued: convention over configuration

In my previous post on WPF application patterns, I went over a few of the most common patterns available, and gave some pointers on how to implement them in WPF. I mentioned how you could use a dependency injection framework to decouple the implementation of your views and controllers from each other. However, the problem when using dependency injection out of the box is that you need an interface to bind to.

Soon, you end up with this:

And this kind of initialization code (using Ninject1, but it’s straightforward to understand):

class ApplicationModule : StandardModule
{
    public override void Load()
    {
        Bind<IAccountController>().To<AccountController>();
        Bind<IAddProductController>().To<AddProductController>();
        Bind<ICategoryController>().To<CategoryController>();
        Bind<ICategoryListController>().To<CategoryListController>();
        Bind<IEditProductController>().To<EditProductController>();
        Bind<IMainWindowController>().To<MainWindowController>();
        Bind<IProductController>().To<ProductController>();
        Bind<IProductListController>().To<ProductListController>();
        Bind<ISettingsController>().To<SettingsController>();
        Bind<ISummaryController>().To<SummaryController>();
    }
}

You’re flooded with interfaces that will be, most of the time, empty (because, remember, you’re using WPF features such as data binding and commanding to control your view, thus removing the need for much specific wiring code besides what will be in the base IView and IController interfaces). Remember that the example above only has 10 views in total, which means it’s a pretty damn small application, and you have to double the initialization code because I only wrote it for the controllers (you need the same thing for the views).

It’s tedious, and quite repetitive, and you know how lazy developers are.

Ruby On Rails made popular the “convention over configuration” paradigm, which previously didn’t have much spotlight. We can get on that bandwagon to remove some of our work.

More specifically, because we’re nice and obedient developers, we know that our views and controllers will be called “FooView” and “BarController“. So instead of getting a view or a controller based on its interface, we could get it based on its “name” (where the name here is “Foo” or “Bar“).

We’re therefore trading this code:

class AccountController : IAccountController
{
    public void Execute()
    {
        IAccountView view = KernelContainer.Kernel.Get<IAccountView>();
    }
}

For this code:

class AccountController : IAccountController
{
    public void Execute()
    {
        IAccountView view = (IAccountView)viewFactory.Create("Account");
    }
}

By the way, notice how most MVC/MVP frameworks for web applications already work like this.

Anyway, this is not too bad, given that we’re doing this so that we can remove most of our interfaces, and change the initialization code to something much simpler and scalable. In our case, the initialization code could be as follows… First, bind all classes called “SomethingController” or “SomethingView” to themselves:

class ApplicationModule : StandardModule
{
    public override void Load()
    {
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            foreach (Type type in assembly.GetTypes())
            {
                // Bind every concrete view and controller type to itself.
                if (typeof(IView).IsAssignableFrom(type) && type.Name.EndsWith("View"))
                    Bind(type).ToSelf().Using<TransientBehavior>();
                if (typeof(IController).IsAssignableFrom(type) && type.Name.EndsWith("Controller"))
                    Bind(type).ToSelf().Using<TransientBehavior>();
            }
        }
    }
}

Then, implement the IViewFactory by building the full type name of the requested view, and then asking the IOC container to build it for you (as opposed to using Activator.CreateInstance, so that any dependencies are correctly injected):

class ViewFactory : IViewFactory
{
    #region IViewFactory Members

    public IView Create(string viewName)
    {
        Type viewType = GetViewType(viewName);
        IView view = (IView)KernelContainer.Kernel.Get(viewType);
        return view;
    }

    #endregion
}
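
The GetViewType helper isn’t shown above; a minimal sketch could look like this (the “MyApp.Views” namespace is just an illustrative assumption about where your views live):

private static Type GetViewType(string viewName)
{
    // Build the conventional type name ("Account" -> "MyApp.Views.AccountView")
    // and look for it in the loaded assemblies.
    string typeName = "MyApp.Views." + viewName + "View";
    foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
    {
        Type type = assembly.GetType(typeName);
        if (type != null)
            return type;
    }
    throw new ArgumentException("No view found for name: " + viewName);
}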

This code will stay pretty much the same as the application grows, unlike the other code, and you can switch modules for your unit tests. Of course, you may want to have a more solid test to know what type should be considered a view, or have better control over which assemblies are reflected, etc. This is just example code. Remember also that this is only executed once at startup, so unless you have severe startup performance requirements, the cost shouldn’t be noticeable. Otherwise, you could always generate the code during build and compile that instead.

Now, I know, I know, my original code is not dependency injection, because I’m fetching the IAccountView myself from the IOC container (so this is really a Service Locator pattern). You would preferably have this instead:

class AccountController : IAccountController
{
    private IAccountView mView;

    public AccountController(IAccountView view)
    {
        mView = view;
    }

    public void Execute()
    {
        // Do something with the view.
    }
}

That is, get the view injected into the controller.

However, you can’t do that half the time. The goal of a controller is not only to process user actions on a view, it’s also to choose what view to show in the first place (which means you don’t always have a one-to-one relationship between controllers and views). You may therefore have some logic code in there that, depending on the context (is the user logged in, does he belong to the correct group, has he been banned, etc.), will show different views (redirect to the login view, display the regular user or administrator account view, display an error message, etc.). You don’t want all possible views injected in your controller, because any of those views could have a costly constructor itself. So you really need to fetch the view yourself programmatically.

What you would get injected, on the other hand, is the view factory that I use in my modified code:

class AccountController : IAccountController
{
    public AccountController(IViewFactory viewFactory)
    {
        IAccountView view = (IAccountView)viewFactory.Create("Account");
    }
}

And as I mentioned earlier, it’s probable that your IAccountView is empty, so you can actually get away with just using IView, thus removing the cast.

Now we have a nice way to manage our views and controllers, and how to make them relate to each other. In future posts, we’ll look at other implementation details of WPF applications, such as giving controllers some context and handling user actions.

1 Because it’s a cool IOC framework, with a super cool name and logo!


Announcing IronCow

One of the projects I’m working on at home, IronCow, is now in a state of "somewhat usable alpha". It’s a .NET API that allows developers to use Remember The Milk, the to-do list website, from any .NET language. IronCow is composed of two layers.

One layer is a "dumb" wrapper around the public REST API, in a way very similar to what FlickrNet does with Flickr‘s own REST API. The second layer is a "rich" object model that exposes your tasks (and other data) through classes that stay in sync with the server as you manipulate them. This was done mainly with WPF applications in mind, enabling things like data binding and other reflection-based mechanisms.

IronCow is available on CodePlex.

A sample application that ships with IronCow is IronCow for Powershell. This is a Powershell snap-in that defines a few cmdlets that will let you manage your tasks through the command line! (if you don’t see the point, or think it’s lame, you’re obviously not enough of a geek) You’ll get to type command lines such as:

Get-RtmTask "dueBefore:tomorrow" | Sort Priority,Estimate | Export-Csv $Home\TasksToDoToday.csv

If that’s not cool, I don’t know what is.

This sample application is still a bit rough around the edges (read: no installer – you need to compile and register the snap-in yourself, and no help file – you need to look at the code to know what parameters the cmdlets take!), but I’ll focus on that once IronCow itself is stabilized. Stay tuned!


WPF application patterns

There's quite a few articles about using the MVC, MVP, MVVM and other MSomethingSomething patterns with WPF applications out there, but I think there's still quite a few things to say on the subject before we can implement all those wonderful acronyms from A to Z without too much worry.

The goal of each of those patterns is to achieve separation of concerns for code reuse, scalability, extensibility, and testability, along with that warm fuzzy feeling you get when you look at your code and you think "wow, this is nice". However, which one should I use for this or that application? What are the pros and cons of each? How do they apply to WPF apps, and how do I implement them?

The View and its Model

Let's start with what we already have: windows and controls. In all those acronymed patterns, this is called the "View".

WPF gives us something with its data binding engine:

<Button Content="{Binding Name}" />

Whatever you bind to the Button needs to have a "Name" property, but there's no requirement on the types or interfaces of either the bound object, or the "Name" property. Of course, in more complex scenarios, things get, well, more complex, but only regarding the type of the properties. In this case, there's a good chance "Name" is assumed to be a System.String, but the object that has a Name can be anything.

This is pretty similar to duck-typing, and gives us a level of abstraction that allows us to switch data bound objects during testing, or even within the application itself, giving the same UI element objects that behave differently, but look similar. The problem is that there's no way to really enforce the "public aspect" of the data bound object other than by launching the application and looking at the debugging output for some error and warning messages. This is like working with template classes in C++, where you can't know whether you can pass your class to a templated method or class until you've compiled your code and looked at the error messages (or absence thereof). I don't think it's a super nice way of working. IMHO it's better in, say, .NET, where you have constraints on generic classes, which effectively makes you lose the "duck-typing" aspect of C++ templates, but lets you worry less (and gives you all kinds of IDE features to improve your productivity). In our UI application, it's therefore probably better to "enforce" what we expect for our data bound objects by declaring an interface, unless you like looking at the debug output messages.

Here, whatever we bind to the view needs to have a name. The view interface probably has a ViewData property, or SetViewData method, that is of type INamed.

public interface INamed
{
    string Name { get; }
}

Those data bound objects are collectively known as the "Model", but this is where some patterns diverge.

If your model is simple enough you won't need to go through an interface. The interface is really needed when unit testing a view can't be done with the actual model, or when the model has dependencies you want to hide. For example, if a model object generates queries to a database, or has other such side effects, you'll have to mock it during your tests. This means that either the object itself is mockable somewhere (disable database queries, or mock the database itself), or you need some wrapper that will expose it indirectly to the view. The wrapper can just be an interface that the model implements, or it can be a proxy object. On the other hand, if the object is "dumb", or has no undesired dependencies, you can use it even during the unit tests.

Mocking the inside of a complex model can lead to tests that are difficult to set up, where you need to mock the database and the logger and the network and whatnot, so it's only useful when you know it won't get out of hand. Some people solve the problem by making the view so empty (in terms of logic code) that there's no point in testing it. The ideal case in WPF is when the view is all XAML markup, with no code behind.

If you're using a proxy or wrapping object around the model, this is known as either the "Presentation Model" or the "View Model". It's a model that you craft specifically for the view, as opposed to the "business" model. In our case, it's frequently an object that implements all kinds of WPF-friendly stuff, like the INotifyPropertyChanged and INotifyCollectionChanged interfaces, or exposes properties that make it easier for WPF designers to write nice templates and styles. For example, you might have the following model:

// Combinable flags, hence the [Flags] attribute and the explicit powers of two.
[Flags]
public enum ModelFlags
{
    SomeFlag = 1,
    SomeOtherFlag = 2,
    YetAnotherFlag = 4
}

public class Model
{
    public ModelFlags Flags { get; set; }
}

The ModelProxy class, that you pass to the view, might have the following properties:

public class ModelProxy : INotifyPropertyChanged
{
    private readonly Model mModel;

    public bool HasSomeFlag { get { return (mModel.Flags & ModelFlags.SomeFlag) != 0; } }
    public bool HasSomeOtherFlag { get { return (mModel.Flags & ModelFlags.SomeOtherFlag) != 0; } }
    public bool HasYetAnotherFlag { get { return (mModel.Flags & ModelFlags.YetAnotherFlag) != 0; } }

    public ModelProxy(Model model)
    {
        // Wrap the model (and, ideally, raise PropertyChanged when it gets updated).
        mModel = model;
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

Those 3 boolean properties will make it easy to dynamically change the look of your UI based on the model's state, because they're easy to hook up to style triggers in XAML markup. If you manage to implement INotifyPropertyChanged for those properties, that's even better, because the UI will change at runtime as the model gets updated.
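
For example, a style trigger keyed off one of those booleans might look like this (a sketch; the Border and the brush are arbitrary choices):

<Style TargetType="Border">
    <Style.Triggers>
        <!-- Light up the element whenever the wrapped model has SomeFlag set. -->
        <DataTrigger Binding="{Binding HasSomeFlag}" Value="True">
            <Setter Property="Background" Value="LightGreen" />
        </DataTrigger>
    </Style.Triggers>
</Style>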

Controllers and Presenters

Now that we have our views, and the model they manipulate and expose (whether it's the actual model, or some view/presentation model), we need someone to implement the application logic, respond to user input, and take care of the flow that leads the user from view to view. Depending on the pattern you're using, those classes are "Controllers" or "Presenters". Controllers typically handle business logic, whereas presenters handle user input too. I'll only refer to Controllers here to prevent awkward wording.

In WPF there's a few tools that fit quite well with those patterns. The first one is the command mechanism.

<Button Content="{Binding Name}" Command="{x:Static ApplicationCommands.Properties}" />

In this case, our button now triggers the "Properties" command. You can also pass a CommandParameter to the command when the button is invoked. You would typically use the data bound object as the parameter, so use that Binding markup extension there too.
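
For example, passing the data bound object itself as the parameter:

<Button Content="{Binding Name}"
        Command="{x:Static ApplicationCommands.Properties}"
        CommandParameter="{Binding}" />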

The WPF commands have been quite extensively documented and discussed, so there's no need to explain how they work here. However, there's a few things worth noting.

First, the controller is going to be a class that has a reference to the view it's responsible for, probably through an interface so you can test it with a mocked view. This means the controller is not in WPF's logical or visual trees, so if you're using commands that bubble or tunnel up or down those trees (which is the case for the commonly used RoutedCommand and RoutedUICommand class), the controller can't be notified. There are mainly 2 ways to solve this problem:

  • The view interface exposes a CommandBindings property through which the controller can add its own command handlers. This is really just exposing the already existing CommandBindings property that you can find on almost all WPF controls. This means the view is the one that will really catch the command invocation, but the delegates you're registering will be the controller's. The advantage is that it's super ultra easy to setup. The disadvantage is that only a view's controller can listen to a given command emitted from that view. You could get around this by having a mechanism to get access to other views, but it can become quite messy for highly modular UIs.
  • The other solution is to implement your own kind of commands, as WPF really only knows about the ICommand interface, which was a pretty good move from the guys who designed it. Take a look at the recently released Composite Application Guidance for WPF, where they have commands specifically written for being handled by non UI classes in a modular application.
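
For the first option, a minimal sketch of what the controller’s hookup could look like (ShowProperties is a hypothetical controller method, and the view interface is assumed to expose its control’s CommandBindings collection):

// Sketch: hooking the controller's handlers up through the view's CommandBindings.
private void HookupCommands(IAccountView view)
{
    view.CommandBindings.Add(new CommandBinding(
        ApplicationCommands.Properties,
        (sender, e) => ShowProperties(e.Parameter),     // Executed handler
        (sender, e) => e.CanExecute = true));           // CanExecute handler
}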

Second, you might ask the seemingly stupid question: "Who came first? The view or the controller?".

The problem is that the view and the controller both have a reference to each other. The view needs to give the controller some context so it can figure out what to do, and the controller then needs to drive the view into doing what it decided. Using custom written commands in simple scenarios will probably let you avoid keeping a reference in one or the other, but more often than not, you'll need a cyclic reference. This has some impact when using inversion of control or dependency injection frameworks, because it means the view will take the controller as a constructor parameter, or the other way around. This means you "pull" one out of your factory, and the other one gets "pulled" in order to be passed as a dependency.

So far, I haven't found any compelling argument for one or the other. What I do is pass the controller as a constructor parameter to the view because, in my mind, the view comes first. Whether a controller gets attached or not just means you get a functional or completely dead UI. Also, I find that I reuse views with different controllers more often than the other way around, so that leads to an easier setup of my dependency injection container. You can find arguments for the other way, too. For example, if you consider that the ideal case for a view is to be only plain XAML markup, then the view shouldn't have any reference to the controller by default. This means in turn that the view should be passed as a constructor parameter to the controller.

Some people make the controller also be the view model. They set the view's DataContext to be the instance of the controller responsible for that view. Of course, this is done through an interface. Note that you can play around, here, with 2 interfaces (one for the controller "role" and one for the view model "role"), both of which the controller class implements. This will be easier to mock for unit tests, and to set up in the application's dependency injection container. Anyway, this can have some advantages, but it prevents reuse of view models, which happens a lot, whether it's because you pass the same type to different views, or because you use composition or inheritance between view model classes.
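
A rough sketch of that last setup (IAccountViewModel is a hypothetical "view model role" interface, and the view interface is assumed to expose its root element's DataContext):

// Sketch: the same class plays both the controller role and the view model role.
class AccountController : IAccountController, IAccountViewModel
{
    public AccountController(IAccountView view)
    {
        // The view binds against the view model "role" of this same object.
        view.DataContext = this;
    }
}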

So what, then?

Well, I hope I gave a good overview of the options that are available when designing a WPF application. It really starts with the UI itself, and commands and data binding, because that's some good stuff you get for free. Design your application around those concepts, and depending on the model you want to manipulate (legacy business model, model you're going to build along with the application, model that comes from a library you don't own, etc.), adapt yourself by using either the Presentation Model or View Model patterns as needed. Keep an eye out for testability and extensibility.

We'll get to more concrete examples in future posts.


Almost everything you need to know about XAML serialization (part 2)

In part one of this little series, we saw how to use XAML as a file format for our own custom types. However, we wanted to reduce the verbosity of XML for specifying objects with only a few properties. This can be done with MarkupExtensions.

We saw that the XAML serializer is asking if our types can be converted to the MarkupExtension class. Let’s give our CustomizableEngine class the ability to do that:

    [TypeConverter(typeof(CustomizableEngineTypeConverter))]
    public class CustomizableEngine : Engine

The implementation of the type converter is a pretty simple skeleton:

public class CustomizableEngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
            return true;
        return base.CanConvertTo(context, destinationType);
    }

    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
        {
            throw new NotImplementedException();
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
    
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        if (sourceType == typeof(MarkupExtension))
            return true;
        return base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    {
        if (value is CustomizableEngineExtension)
        {
            throw new NotImplementedException();
        }
        return base.ConvertFrom(context, culture, value);
    }
}

Now we just need to know what a MarkupExtension is and how to build one. Since this is well explained in MSDN there’s no need to detail it here, but to summarize, it’s a class that, by convention, has a name ending in "Extension", and each instance of which can provide a value of another type when asked for it. The XAML serializer will create and initialize a MarkupExtension based on the standard curly-brace syntax.

So it really all boils down to creating our own MarkupExtension class for the CustomizableEngine class:

public class CustomizableEngineExtension : MarkupExtension
{
    public int Power { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        return new CustomizableEngine() { PowerSetting = Power };
    }
}

Now we can fix the type converter class:

public class CustomizableEngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
            return true;
        return base.CanConvertTo(context, destinationType);
    }

    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
        {
            CustomizableEngine engine = (CustomizableEngine)value;
            // Carry the engine's setting over so it survives the round-trip.
            CustomizableEngineExtension extension = new CustomizableEngineExtension() { Power = engine.PowerSetting };
            return extension;
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
    
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        if (sourceType == typeof(MarkupExtension))
            return true;
        return base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    {
        if (value is CustomizableEngineExtension)
        {
            CustomizableEngineExtension extension = (CustomizableEngineExtension)value;
            return new CustomizableEngine() { PowerSetting = extension.Power };
        }
        return base.ConvertFrom(context, culture, value);
    }
}

Our XAML markup now looks like this:

<Robot Name="Tony Stark" Engine="{CustomizableEngine Power=0}" xmlns="https://ludovic.chabant.com/xaml" />

And we can change the "Power=0" bit to adjust the Power property of the engine.

MarkupExtensions are very handy to make the XAML markup shorter and more readable, but some people find it ugly when you start nesting them:

<Robot
    xmlns="https://ludovic.chabant.com/xaml"
    Name="Tony Stark"
    Engine="{CustomizableEngine Power=0}"
    Weapon="{MachineGun AmmoType={HollowPoint Diameter=7.62}, Model={SteyrAug Scope={LaserScope}}, ClipCount=10}"
    />

I personally don’t mind too much (unless it starts looking ridiculous, which the previous example is getting pretty close to). As a rule of thumb, you probably should not define a MarkupExtension for a type if:

  • The type "contains" another type: the MarkupExtension syntax is really for property initialization. If the type is a container, like a collection, a UI panel, a decorator or facade, or any other thing that "logically contains" another object, there’s no point in giving the ability to declare an instance of this type in one line, as there’s a good chance the contained type won’t be that concise.
  • The type has properties that are not "trivial": by trivial, I mean values that are either basic types (int, float, bool, string, etc.), or moderately simple structures. In that latter case, there might be a need for a small nested MarkupExtension, but it’s okay if there’s only a couple of those properties, and their types have only a couple of properties themselves.

Of course, as always with "should"-based sentences, there are exceptions. Most of the time, usage and design dictate this kind of decision. For example, your type might have several reference type properties, but what they represent means that in typical cases, initialization is short and simple. The Binding class in WPF comes to mind: it’s a fairly complex class, but most of the time you only set the "Path" property anyway.

Now you can start using XAML as your own serialization format! You might want to read a bit on attributes like ContentPropertyAttribute or DependsOnAttribute, too. They’re simple enough to understand, and will come in handy when you start mapping your object model to XAML.
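
As a quick illustration of ContentPropertyAttribute (ArmedRobot and its Weapons collection are hypothetical, not part of the earlier example): marking a collection property as the content property lets XAML nest its items directly inside the element, without an <ArmedRobot.Weapons> wrapper.

// Hypothetical type, just to show the attribute; requires System.Windows.Markup.
[ContentProperty("Weapons")]
public class ArmedRobot
{
    public List<Weapon> Weapons { get; set; }
}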

The last thing to do is package all your XAML and resource files in an OpenXML package, and you will truly have a 21st century file format you can be proud of!


Almost everything you need to know about XAML serialization (part 1)

The XML serialization API in .NET is pretty cool, but if you really want your data to look hype and futuristic, you can try out XAML!

XAML is not just for WPF or WF, it’s actually a generic XML-based language used to describe hierarchies of .NET objects. You can therefore use it to serialize your own stuff. However, unlike the XmlSerializer which you can twist into writing and reading any XML, the XAML serializer will conform to the XAML “standard”.

Let’s start with some simple stuff. Because I saw Iron Man not too long ago, and Firefox 3, about to be released, has this “robots” theme going on, let’s describe a robot in XAML.

Here’s the Robot class, and related classes:

public class Robot
{
    public string Name { get; set; }
    public Engine Engine { get; set; }
    public Weapon Weapon { get; set; }
}
public abstract class Engine
{
    public abstract int Power { get; }
}
public abstract class Weapon
{
    public abstract int Damage { get; }
    public abstract int AmmunitionCount { get; }
}

Let’s quickly serialize it in XAML:

static void Main(string[] args)
{
    Robot r = new Robot()
    {
        Name = "Tony Stark",
        Engine = new ElectricEngine(),
        Weapon = new MachineGunWeapon()
    };
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.Indent = true;
    using (XmlWriter writer = XmlWriter.Create("robot.xaml", settings))
    {
        XamlWriter.Save(r, writer);
    }
}

Note that you need to reference WindowsBase and PresentationFramework for your project to compile. The XAML serializer is part of WPF’s implementation, so you need WPF even though you’re not really using it. I think that’s because the XAML serializer is specifically tweaked for WPF, the same way Silverlight has its own XAML serializer. The XAML syntax, however, is independent from those frameworks. For now, we have to use WPF’s serializer, but maybe in the future we’ll see a more general-purpose serializer made available.

Here’s the resulting “robot.xaml” markup:

<?xml version="1.0" encoding="utf-8"?>
<Robot Name="Tony Stark" xmlns="clr-namespace:XamlFun;assembly=XamlFun">
  <Robot.Engine>
    <ElectricEngine />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

If you’re running Visual Studio Express, remember that the output file will end up either in the Debug or Release directory depending on how you launched the program (F5 vs. Ctrl+F5), unless you told Visual Studio that you know what you’re doing.

The first thing to change is to make it look a bit more professional. What’s that ugly XML namespace? Let’s have a fancy one. This is done using the XmlnsDefinitionAttribute, set on the whole assembly:

[assembly: System.Windows.Markup.XmlnsDefinition("https://ludovic.chabant.com/xaml", "XamlFun")]

Now the markup looks slightly better:

<?xml version="1.0" encoding="utf-8"?>
<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <ElectricEngine />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

In case you have several XML namespaces in your markup (e.g. you’re serializing data from several different .NET namespaces, or different libraries and APIs), you might want to suggest a cool namespace prefix to use for your stuff. This is done using the XmlnsPrefixAttribute:

[assembly: System.Windows.Markup.XmlnsPrefix("https://ludovic.chabant.com/xaml", "fun")]

This will tell the XAML serializer to use an XML namespace prefix called “fun” for our stuff. This is just a suggestion, though, so it can still choose another prefix. In our case, since there’s no collision, the serializer decides that it can keep using the default namespace.

Now, you might complain that this markup is incredibly verbose for what it describes. I mean, XML is pretty verbose most of the time, but this is crazy. Let’s fix that.

You will notice that the “Name” property was serialized to an XML attribute, and it’s the sensible thing to do in this case. The “Engine” and “Weapon” properties, however, have been serialized to XML elements because they’re .NET objects, and the serializer doesn’t know if it can express them in a more compact way. Since XML doesn’t have much else besides elements and attributes, if you’re not happy with an element, it means you want an attribute, which means you want a string.

As a .NET programmer, you know about TypeConverters, so your first bet is to create one for the Engine type that will convert to a string. This one is very simple, and just converts the Engine instance to the name of the Engine’s implementation type.

public class EngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(string))
            return true;
        return base.CanConvertTo(context, destinationType);
    }
    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(string))
        {
            Engine engine = (Engine)value;
            return engine.GetType().Name;
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
}

Then you decorate the Engine type appropriately (if you want to use this converter only for the Robot’s Engine property, and not for all instances of Engine, you can put the TypeConverterAttribute on the property instead of the type):

[TypeConverter(typeof(EngineTypeConverter))]
public abstract class Engine
{
    public abstract int Power { get; }
}

However, this doesn’t change much. Running the program gives you the same markup as before. This is because the XAML serializer is not completely stupid, and won’t serialize something it won’t be able to deserialize later. In our case, we don’t say how we can also convert a string to an Engine! Adding the following lines to the EngineTypeConverter fixes the problem, even though we don’t implement the ConvertFrom method:

public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
{
    if (sourceType == typeof(string))
        return true;
    return base.CanConvertFrom(context, sourceType);
}

Running the program with breakpoints in both CanConvertTo and CanConvertFrom tells us that, indeed, the XAML serializer is querying for conversion to and from string. It’s actually asking for this quite a few times, and I’m wondering if this could be optimized in a future service pack for .NET 3.0… You’ll also realize that it’s querying for conversion to and from another type, MarkupExtension, but we’ll get to this in part 2 of this series.

Now here’s our markup:

<Robot Name="Tony Stark" Engine="ElectricEngine" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

Much better, but not quite superb either. The problem here is that we can’t customize much of the engine. Our XML attribute only specifies the type of engine we’re using, and if we were to implement the ConvertFrom method on the EngineTypeConverter, we would only be able to build a new instance of the given type, and that’s it. It wouldn’t even quite work well because we’re not printing the full name of the type.
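
To make that concrete, a ConvertFrom along those lines could look like this (a sketch; note how it has to guess the namespace, since ConvertTo only wrote the short type name):

public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
{
    string typeName = value as string;
    if (typeName != null)
    {
        // Only works for parameterless engines living in the XamlFun namespace.
        return (Engine)Activator.CreateInstance(Type.GetType("XamlFun." + typeName));
    }
    return base.ConvertFrom(context, culture, value);
}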

The problem we have here is that we want to make it easy for users and designer applications to read and write our robot markup. If we have a few “built-in” engines, we want it to be “natural” to use them.

Let’s say we have the following built-in engines: an electric engine, a nuclear engine, and a coal engine (because I love steampunk stuff).

public class ElectricEngine : Engine
{
    public override int Power
    {
        get { return 2000; }
    }
}
public class NuclearEngine : Engine
{
    public override int Power
    {
        get { return 10000; }
    }
}
public class CoalEngine : Engine
{
    public override int Power
    {
        get { return 8; }
    }
}

It would be nice if we could write XAML like this:

<Robot Name="Tony Stark" Engine="Nuclear" />

But on the other hand, if we want to use a more complex engine like the following CustomizableEngine, we want it to revert back to the XML element based syntax:

public class CustomizableEngine : Engine
{
    private int mPower;
    public override int Power
    {
        get { return mPower; }
    }
    public int PowerSetting
    {
        get { return mPower; }
        set { mPower = value; }
    }
}

<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <CustomizableEngine PowerSetting="40" />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

We could even imagine that the built-in engine types can be customized too, but if you want the default values, you can use the simpler syntax.

The WPF designers ran into the same problem for things like brushes and colours, for which they wanted you to be able to just write “Red” and “Green” and “DarkOlive”, and have the appropriate graphic object be created. To this end, they created the ValueSerializer class. This class is a bit similar to TypeConverter, but only converts to and from strings. The big difference, however, is that the object to convert is passed as a parameter to the CanConvertTo method, which is not the case with the TypeConverter.

public class EngineValueSerializer : ValueSerializer
{
    public override bool CanConvertToString(object value, IValueSerializerContext context)
    {
        if (value is NuclearEngine ||
            value is ElectricEngine ||
            value is CoalEngine)
            return true;
        return base.CanConvertToString(value, context);
    }
    public override string ConvertToString(object value, IValueSerializerContext context)
    {
        return value.GetType().Name.Replace("Engine", "");
    }
}

Similarly to the TypeConverter, you need to decorate either the type or the property with the ValueSerializerAttribute. You can even leave the TypeConverterAttribute because the XAML serializer will give priority to the ValueSerializer.

[TypeConverter(typeof(EngineTypeConverter))]
[ValueSerializer(typeof(EngineValueSerializer))]
public abstract class Engine
{
    public abstract int Power { get; }
}

Now we can say that if the object to convert is one of our 3 built-in types, we can convert it to a string which would be “Nuclear”, “Electric” or “Coal” (the type name without the “Engine” suffix). If it’s something else (like our CustomizableEngine), it won’t convert it, and revert back to the default XAML syntax, which uses XML elements. Obviously, for real code that has many built-in types, you will need to replace that ugly “if” statement by some lookup in a table, or inspecting the type for some custom attribute, or something.
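
For instance, a simple lookup table would do (a sketch, using .NET 3.5’s HashSet):

// The set of engine types that have a short, built-in string form.
private static readonly HashSet<Type> BuiltInEngines = new HashSet<Type>
{
    typeof(NuclearEngine), typeof(ElectricEngine), typeof(CoalEngine)
};

public override bool CanConvertToString(object value, IValueSerializerContext context)
{
    if (value != null && BuiltInEngines.Contains(value.GetType()))
        return true;
    return base.CanConvertToString(value, context);
}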

Well, that’s all wonderful, and if we apply the same thing to the Weapon class, we can end up with some nice markup:

<Robot xmlns="https://ludovic.chabant.com/xaml"
    Name="Tony Stark"
    Engine="Nuclear"
    Weapon="MachineGun" />

But if we want to customize the engine or the weapon, we need to switch to that verbose syntax:

<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <CustomizableEngine PowerSetting="12" />
  </Robot.Engine>
  <Robot.Weapon>
    <LaserWeapon BeamColor="Red" />
  </Robot.Weapon>
</Robot>

In part 2 of this series, we’ll see how we can use the MarkupExtension type to keep some nicely short syntax even when specifying values for object properties. Stay tuned!


XAML markup is for real men

The Visual Studio designer for WPF is quite lame as it stands now, and although it will get a lot better when Visual Studio 2008 SP1 is released, if you’re a WPF developer, you really need to know how to read and write XAML by hand, much like web developers and designers know how to read and write HTML and CSS by hand.

Since you’re not a wussy, what you actually want is for Visual Studio to always show a full view of the XAML markup when you open a XAML document. It’s also a good thing because you won’t have to wait for a few seconds for the designer to load the first time.

To do this in Visual Studio 2008 Express, go to the Options dialog, check “Show all settings“, and go to “Text Editor“, “XAML“, “Miscellaneous“. There, check the “Always open documents in full XAML view“.

Voilà!


Custom provider attributes in a configuration element (part 2)

In the first part of this little series, we implemented a simple, read-only way to get custom attributes from a configuration element, using a provider pattern use case. We ended up trying to modify the configuration file, without much success.

Right now, we have the following method, called at the end of the program:

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    section.CookieProvider.Options["maxCookies"] = "6";
    configuration.Save();
}

This doesn’t change anything in the configuration file, even though we’re calling the Save method. That’s because .NET’s configuration system is kind of smart: it realizes that nothing has changed, so nothing needs to be written. How can it be that nothing has changed, you say, since we modified the Options collection? Oh, but this collection can’t be "seen" by the Configuration class because it’s just a local property, unlike the other one, Type, which is a wrapper around ConfigurationElement‘s internal property system:

public NameValueCollection Options { get; private set; }

[ConfigurationProperty("type", IsRequired = true)]
// Validation/conversion attributes removed for readability...
public Type Type
{
    get { return this["type"] as Type; }
    set { this["type"] = value; }
}

Even if we force the configuration to save itself, using some overloads of the Save method, we end up with a configuration file that has lost its custom attributes, leaving only the provider type attribute, because the configuration system doesn’t know about those options.
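
For reference, forcing the save just means calling one of the standard Save overloads on the Configuration object, something like this:

// Write everything out, even values the configuration system considers unchanged.
// The custom attributes are still lost, because they were never registered as configuration properties.
configuration.Save(ConfigurationSaveMode.Full, true);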

We can’t turn Options into a ConfigurationProperty, though, because that would mean having a nested collection inside our "cookieProvider" XML element. We don’t want that. We want to dynamically add new properties to the ConfigurationElement.

Looking at the ConfigurationElement class, we can spot something promising: there’s a virtual property called Properties which, it seems, contains the element’s configuration properties. We can dynamically add items to it:

public class CookieProviderConfigurationElement : ConfigurationElement
{
    // Properties...
    // Constructor...

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        // Register the unknown attribute as a string configuration property on the fly...
        if (!Properties.Contains(name))
            Properties.Add(new ConfigurationProperty(name, typeof(string), null));
        // ...and store its value both in the property bag and in the local Options collection.
        this[name] = value;
        Options[name] = value;
        return true;
    }

    // ValidateProviderType...
}

Now if we force a configuration save, we keep our custom attributes!

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    section.CookieProvider.Options["maxCookies"] = "6";
    configuration.Save(ConfigurationSaveMode.Minimal, true);    // Force a minimal configuration change
}

And here’s the resulting configuration file:

<configuration>
  <configSections>
    <section name="cookieFactory" type="ConfigurationTest.CookieFactoryConfigurationSection, ConfigurationTest" />
  </configSections>
  <cookieFactory>
    <cookieProvider type="ConfigurationTest.SimpleCookieProvider, ConfigurationTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
      maxCookies="5" flavor="chocolate" />
  </cookieFactory>
  <system.web>
    <trust level="Full" />
    <webControls clientScriptsLocation="/aspnet_client/{0}/{1}/" />
  </system.web>
</configuration>

We get a bit of additional stuff (like that system.web section) by forcing a save, but that doesn’t matter.

However, we notice that the "maxCookies" parameter is still "5", even though we set it to "6". Well, that’s no surprise given that, once again, we set that value on the local Options property, which is unknown to the configuration API. We somehow have to modify the internal Properties collection when we modify the options.

At this point, I should let the guy in the audience who’s been raising his hand since the beginning ask his question: "what about the ProviderSettings class?". Well, ProviderSettings is actually almost what we want. It has a NameValueCollection property called Parameters, and if you modify it, you get a correctly modified configuration when you save. However, it has hard-coded, required "name" and "type" configuration properties, which means that if your base provider class exposes a different set of properties, you’ll have to write your own ConfigurationElement class from scratch anyway (inheriting from ProviderSettings won’t help, because your other "standard" configuration properties would end up in the options collection, which might be confusing).
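
To make that concrete, here’s a minimal sketch of what the ProviderSettings route would look like (the ProviderBasedCookieFactorySection type and its CookieProvider property are hypothetical, rewritten to expose a ProviderSettings instead of our custom element):

// ProviderSettings lives in System.Configuration; its Parameters collection is wired
// into the configuration property system, so modifying it is enough for Save() to pick up the change.
var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
var section = configuration.GetSection("cookieFactory") as ProviderBasedCookieFactorySection;  // hypothetical section type
ProviderSettings provider = section.CookieProvider;  // hypothetical property of type ProviderSettings
provider.Parameters["maxCookies"] = "6";
configuration.Save();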

Besides, isn’t it fun to reinvent the wheel to learn how it works? Let’s resume our investigations, then.

Now we have a few different choices:

  • We could write a ConfigurationOptionCollection class with an interface similar to that of NameValueCollection. This class would be linked to a ConfigurationElement, and would keep its items in sync with the element’s Properties collection. The problem with this approach is that the Properties property is protected, so we would need the ConfigurationElement to also implement an interface or inherit from another abstract class that would give our ConfigurationOptionCollection a way to manipulate it. We would also need that interface or abstract class to provide a way to prevent illegal actions, like adding an option that has the same name as a regular configuration property (in our case, for example, adding an option called "type", which would collide with the required provider type property). That’s 2 additional classes, and some complexity to keep things in sync. Since most use cases for configuration are either read-only, or read and then write in 2 separate, one-shot operations, we would rarely need to keep things in sync.
  • We could create an abstract subclass of ConfigurationElement that keeps track of options for us, resynchronizing the options’ NameValueCollection with the internal configuration properties whenever they are accessed. This is roughly how ProviderSettings works.

Here, I went with a simpler version of the second choice, where I replaced the Options property with a couple of methods. This is purely so that there’s minimal code to make it work, and we can see what the important bits are. Introducing the LooseConfigurationElement (yeah, I’m bad with names):

public abstract class LooseConfigurationElement : ConfigurationElement
{
    protected LooseConfigurationElement()
    {
    }

    // Registers the option as a configuration property on the fly (if needed) and stores its value.
    public void SetOption(string name, string value)
    {
        if (!this.Properties.Contains(name))
        {
            ConfigurationProperty optionProperty = new ConfigurationProperty(name, typeof(String), null);
            this.Properties.Add(optionProperty);
        }
        this[name] = value;
    }

    // Builds a snapshot of the current options; modifying the returned collection doesn’t affect the element.
    public NameValueCollection GetOptions()
    {
        NameValueCollection options = new NameValueCollection();
        foreach (ConfigurationProperty property in this.Properties)
        {
            if (IsOptionProperty(property))
            {
                options.Add(property.Name, this[property.Name] as string);
            }
        }
        return options;
    }

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        SetOption(name, value);
        return true;
    }

    // Extension point: subclasses decide which properties are "options", as opposed to standard ones like "type".
    protected abstract bool IsOptionProperty(ConfigurationProperty property);
}

And the refactored CookieProviderConfigurationElement:

public class CookieProviderConfigurationElement : LooseConfigurationElement
{
    private static ConfigurationProperty sTypeProperty =
        new ConfigurationProperty(
            "type",
            typeof(Type),
            null,
            ConfigurationPropertyOptions.IsRequired);

    [ConfigurationProperty("type", IsRequired = true)]
    [TypeConverter(typeof(TypeNameConverter))]
    [CallbackValidator(Type = typeof(CookieProviderConfigurationElement), CallbackMethodName = "ValidateProviderType")]
    public Type Type
    {
        get { return this[sTypeProperty] as Type; }
        set { this[sTypeProperty] = value; }
    }

    public static void ValidateProviderType(object type)
    {
        if (!typeof(ICookieProvider).IsAssignableFrom((Type)type))
        {
            throw new ConfigurationErrorsException("The cookie provider must implement the ICookieProvider interface.");
        }
    }

    protected override bool IsOptionProperty(ConfigurationProperty property)
    {
        if (property == sTypeProperty)
            return false;
        return true;
    }
}
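
With this in place, the SimulateConfigurationChange method from earlier goes through SetOption instead of the old Options property (a sketch, assuming CookieProvider now returns the refactored element), and the change should finally make it to the file:

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    // SetOption registers "maxCookies" as a real configuration property,
    // so the configuration system sees the modification and persists it.
    section.CookieProvider.SetOption("maxCookies", "6");
    configuration.Save(ConfigurationSaveMode.Minimal, true);
}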

The LooseConfigurationElement is weak in several respects, for example the fact that it’s not thread-safe. But it fulfills our requirements, and you can see how easily we can control which configuration properties are "standard" and which ones are "options" through the IsOptionProperty method. This is a small but useful improvement over the ProviderSettings class, which is written only for the ProviderBase class.

You can improve on the LooseConfigurationElement pretty easily:

  • First, you might want to avoid rebuilding the Options collection every time a client asks for it. There are only 2 ways for this collection to be modified: a client modifies it directly, or somebody adds option properties to the internal Properties collection. Since this second situation only happens in OnDeserializeUnrecognizedAttribute, we can say that we, in fact, almost never have to rebuild the Options collection!
  • However, it also means that we need to keep the internal Properties collection in sync when people modify the Options collection. Since NameValueCollection doesn’t have any dirty flag, you can either create your own collection with that feature (see the sketch below), use something like ObservableCollection instead, or go down the ProviderSettings route, which partially rebuilds the Properties collection every time it is accessed. In that case, we’re almost trading rebuilding one collection for rebuilding another… Also, you’ll have to juggle options that were already added as ConfigurationProperties, new ones that only exist in name/value form, and those that have been removed. But you’re smart, you can do that.
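
Here’s a minimal sketch of the dirty flag idea, just to show the shape of it (the class name and the MarkClean method are made up for the example, and Clear and the other mutators would need the same treatment):

// NameValueCollection lives in System.Collections.Specialized; Add, Set and Remove are virtual.
public class TrackedOptionCollection : NameValueCollection
{
    // Set to true whenever the collection is modified, until the owner resets it.
    public bool IsDirty { get; private set; }

    public override void Add(string name, string value)
    {
        base.Add(name, value);
        IsDirty = true;
    }

    public override void Set(string name, string value)
    {
        // The options[name] = value indexer goes through Set, so it's covered too.
        base.Set(name, value);
        IsDirty = true;
    }

    public override void Remove(string name)
    {
        base.Remove(name);
        IsDirty = true;
    }

    // Called by the owning element once it has pushed the options back into its Properties collection.
    public void MarkClean()
    {
        IsDirty = false;
    }
}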

Well that’s it, we’ve got all the info we need. I hope it was useful for some of you!