The Stochastic Game

Ramblings of General Geekery

WPF application patterns

There's quite a few articles about using the MVC, MVP, MVVM and other MSomethingSomething patterns with WPF applications out there, but I think there's still quite a few things to say on the subject before we can implement all those wonderful acronyms from A to Z without too much worry.

The goal of each of those patterns is to achieve separation of concerns for code reuse, scalability, extensibility, and testability, along with that warm fuzzy feeling you get when you look at your code and you think "wow, this is nice". However, which one should I use for this or that application? What are the pros and cons of each? How do they apply to WPF apps, and how do I implement them?

The View and its Model

Let's start with what we already have: windows and controls. In all those acronymed patterns, this is called the "View".

WPF already gives us a lot for free with its data binding engine:

<Button Content="{Binding Name}" />

Whatever you bind to the Button needs to have a "Name" property, but there's no requirement on the types or interfaces of either the bound object, or the "Name" property. Of course, in more complex scenarios, things get, well, more complex, but only regarding the type of the properties. In this case, there's a good chance "Name" is assumed to be a System.String, but the object that has a Name can be anything.

This is pretty similar to duck-typing, and gives us a level of abstraction that allows us to switch data bound objects during testing, or even within the application itself, giving the same UI element objects that behave differently but look similar. The problem is that there's no way to really enforce the "public aspect" of the data bound object other than by launching the application and looking at the debugging output for error and warning messages. This is like working with template classes in C++, where you can't know whether you can pass your class to a templated method or class until you've compiled your code and looked at the error messages (or absence thereof).

I don't think that's a super nice way of working. IMHO it's better in, say, .NET, where you have constraints on generic classes, which effectively makes you lose the "duck-typing" aspect of C++ templates, but lets you worry less (and gives you all kinds of IDE features to improve your productivity). In our UI application, it's therefore probably better to "enforce" what we expect from our data bound objects by declaring an interface, unless you like staring at the debug output messages.

Here, whatever we bind to the view needs to have a name, so we can enforce that expectation with an interface. The view interface probably has a ViewData property, or a SetViewData method, of type INamed:

public interface INamed
{
    string Name { get; }
}
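
For instance, the view's own contract could be as small as the following sketch; the IView name and its single member are just an illustration of the idea, not anything WPF prescribes.

// Hypothetical view contract: the controller and the unit tests only ever see
// this interface, and anything handed to the view must at least be INamed.
public interface IView
{
    INamed ViewData { get; set; }
}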

Those data bound objects are collectively known as the "Model", but this is where some patterns diverge.

If your model is simple enough you won't need to go through an interface. The interface is really needed when unit testing a view can't be done with the actual model, or when the model has dependencies you want to hide. For example, if a model object generates queries to a database, or has other such side effects, you'll have to mock it during your tests. This means that either the object itself is mockable somewhere (disable database queries, or mock the database itself), or you need some wrapper that will expose it indirectly to the view. The wrapper can just be an interface that the model implements, or it can be a proxy object. On the other hand, if the object is "dumb", or has no undesired dependencies, you can use it even during the unit tests.

Mocking the inside of a complex model can lead to difficult-to-set-up tests, where you need to mock the database and the logger and the network and whatnot, so it's only useful when you know it won't get out of hand. Some people solve the problem by making the view so empty (in terms of logic code) that there's no point in testing it. The ideal case in WPF is when the view is all XAML markup, with no code behind.

If you're using a proxy or wrapping object around the model, this is known as either the "Presentation Model", or the "View Model". It's a model that you craft specifically for the view, as opposed to the "business" model. In our case, it's frequently an object that implements all kinds of WPF-friendly stuff, like the INotifyPropertyChanged and INotifyCollectionChanged interfaces, or exposes properties that make it easier for WPF designer to write nice templates and styles. For example, you might have the following model:

public enum ModelFlags
{
    SomeFlag,
    SomeOtherFlag,
    YetAnotherFlag
}

public class Model
{
    public ModelFlags Flags { get; set; }
}

The ModelProxy class, that you pass to the view, might have the following properties:

public class ModelProxy : INotifyPropertyChanged
{
    private readonly Model mModel;

    public ModelProxy(Model model)
    {
        // wrap the model... (one possible mapping: each boolean simply
        // reflects the wrapped model's current flag)
        mModel = model;
    }

    public bool HasSomeFlag { get { return mModel.Flags == ModelFlags.SomeFlag; } }
    public bool HasSomeOtherFlag { get { return mModel.Flags == ModelFlags.SomeOtherFlag; } }
    public bool HasYetAnotherFlag { get { return mModel.Flags == ModelFlags.YetAnotherFlag; } }

    public event PropertyChangedEventHandler PropertyChanged;
}

Those 3 boolean properties will make it easy to dynamically change the look of your UI based on the model's state, because they're easy to hook up to style triggers in XAML markup. If you manage to implement INotifyPropertyChanged for those properties, that's even better, because the UI will change at runtime as the model gets updated.

Controllers and Presenters

Now that we have our views, and the model they manipulate and expose (whether it's the actual model, or some view/presentation model), we need someone to implement the application logic, respond to user input, and take care of the flow that leads the user from view to view. Depending on the pattern you're using, those classes are "Controllers" or "Presenters". Controllers typically handle business logic, whereas presenters handle user input too. I'll only refer to Controllers here to prevent awkward wording.

In WPF there's a few tools that fit quite well with those patterns. The first one is the command mechanism.

<Button Content="{Binding Name}" Command="{x:Static ApplicationCommands.Properties}" />

In this case, our button now triggers the "Properties" command. You can also pass a CommandParameter to the command when the button is invoked. You would typically use the data bound object as the parameter, so use that Binding markup extension there too.

The WPF commands have been quite extensively documented and discussed, so there's no need to explain how they work here. However, there's a few things worth noting.

First, the controller is going to be a class that has a reference to the view it's responsible for, probably through an interface so you can test it with a mocked view. This means the controller is not in WPF's logical or visual trees, so if you're using commands that bubble or tunnel up or down those trees (which is the case for the commonly used RoutedCommand and RoutedUICommand classes), the controller can't be notified. There are mainly 2 ways to solve this problem:

  • The view interface exposes a CommandBindings property through which the controller can add its own command handlers. This is really just exposing the already existing CommandBindings property that you can find on almost all WPF controls. This means the view is the one that will really catch the command invocation, but the delegates you're registering will be the controller's. The advantage is that it's super ultra easy to set up. The disadvantage is that only a view's controller can listen to a given command emitted from that view. You could get around this by having a mechanism to get access to other views, but it can become quite messy for highly modular UIs.
  • The other solution is to implement your own kind of commands, as WPF really only knows about the ICommand interface, which was a pretty good move from the guys who designed it. Take a look at the recently released Composite Application Guidance for WPF, where they have commands specifically written for being handled by non-UI classes in a modular application. A minimal sketch of the idea follows this list.
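
To illustrate that second option, here's a hedged sketch of such a command. The DelegateCommand name and its members are mine for this example; they are not an API from WPF or from the Composite Application Guidance:

// Minimal sketch of a controller-friendly command: WPF only cares about
// ICommand, so the actual delegates can live in the controller, not in the view.
public class DelegateCommand : ICommand
{
    private readonly Action<object> mExecute;
    private readonly Predicate<object> mCanExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute)
    {
        mExecute = execute;
        mCanExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return mCanExecute == null || mCanExecute(parameter);
    }

    public void Execute(object parameter)
    {
        mExecute(parameter);
    }

    // The controller can call this when its state changes.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}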

Second, you might ask the seemingly stupid question: "Who came first? The view or the controller?".

The problem is that the view and the controller both have a reference to each other. The view needs to give the controller some context so it can figure out what to do, and the controller then needs to drive the view into doing what it decided. Using custom-written commands in simple scenarios will probably let you avoid keeping a reference in one or the other, but more often than not, you'll need a cyclic reference. This has some impact when using inversion of control or dependency injection frameworks, because it means the view will take the controller as a constructor parameter, or the other way around. You "pull" one out of your factory, and the other one gets "pulled" in order to be passed as a dependency.

So far, I haven't found any compelling argument for one or the other. What I do is pass the controller as a constructor parameter to the view because, in my mind, the view comes first. Whether a controller gets attached or not just means you get a functional or completely dead UI. Also, I find that I reuse views with different controllers more often than the other way around, so that leads to an easier setup of my dependency injection container. You can find arguments for the other way, too. For example, if you consider that the ideal case for a view is to be only plain XAML markup, then the view shouldn't have any reference to the controller by default. This means in turn that the view should be passed as a constructor parameter to the controller.

Some people make the controller also be the view model: they set the view's DataContext to the instance of the controller responsible for that view. Of course, this is done through an interface. Note that you can play around, here, with 2 interfaces (one for the controller "role" and one for the view model "role"), both of which the controller class implements. This makes it easier to mock for unit tests, and to set up the dependency injection container in the application. Anyway, this can have some advantages, but it prevents reuse of view models, which happens a lot, whether it's because you pass the same type to different views, or because you use composition or inheritance between view model classes.
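
A rough sketch of that dual-role setup, with all the type names made up for the example, could look like this:

// The same object plays both roles: the application registers it as the
// controller, while the view's DataContext only ever sees the view model interface.
public interface ICustomerController
{
    void SaveChanges();
}

public interface ICustomerViewModel : INotifyPropertyChanged
{
    string CustomerName { get; }
}

public class CustomerController : ICustomerController, ICustomerViewModel
{
    public string CustomerName { get; private set; }

    // Raised when CustomerName changes (omitted here for brevity).
    public event PropertyChangedEventHandler PropertyChanged;

    public void SaveChanges()
    {
        // application logic would go here...
    }
}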

So what, then?

Well, I hope I gave a good overview of the options that are available when designing a WPF application. It really starts with the UI itself, and commands and data binding, because that's some good stuff you get for free. Design your application around those concepts, and depending on the model you want to manipulate (legacy business model, model you're going to build along with the application, model that comes from a library you don't own, etc.), adapt yourself by using the Presentation Model or View Model patterns as needed. Keep an eye out for testability and extensibility.

We'll get to more concrete examples in future posts.


Almost everything you need to know about XAML serialization (part 2)

In part one of this little series, we saw how to use XAML as a file format for our own custom types. We also wanted to reduce the verbosity of the XML when specifying objects that have only a few properties. This can be done with MarkupExtensions.

We saw that the XAML serializer is asking if our types can be converted to the MarkupExtension class. Let’s give our CustomizableEngine class the ability to do that:

    [TypeConverter(typeof(CustomizableEngineTypeConverter))]
    public class CustomizableEngine : Engine

The implementation of the type converter is a pretty simple skeleton:

public class CustomizableEngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
            return true;
        return base.CanConvertTo(context, destinationType);
    }

    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
        {
            throw new NotImplementedException();
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
    
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        if (sourceType == typeof(MarkupExtension))
            return true;
        return base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    {
        if (value is CustomizableEngineExtension)
        {
            throw new NotImplementedException();
        }
        return base.ConvertFrom(context, culture, value);
    }
}

Now we just need to know what a MarkupExtension is and how to build one. Since this is well explained on MSDN, there’s no need to detail it here, but to summarize: it’s a class whose name, by convention, ends in "Extension", and each instance of which can provide a value of another type when asked for it. The XAML serializer will create and initialize a MarkupExtension based on the standard curly-brace syntax.

So it really all boils down to creating our own MarkupExtension class for the CustomizableEngine class:

public class CustomizableEngineExtension : MarkupExtension
{
    public int Power { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        return new CustomizableEngine() { PowerSetting = Power };
    }
}

Now we can fix the type converter class:

public class CustomizableEngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
            return true;
        return base.CanConvertTo(context, destinationType);
    }

    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(MarkupExtension))
        {
            CustomizableEngine engine = (CustomizableEngine)value;
            CustomizableEngineExtension extension = new CustomizableEngineExtension();
            extension.Power = engine.PowerSetting;  // copy the engine's settings so they're not lost in the markup
            return extension;
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
    
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        if (sourceType == typeof(MarkupExtension))
            return true;
        return base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
    {
        if (value is CustomizableEngineExtension)
        {
            CustomizableEngineExtension extension = (CustomizableEngineExtension)value;
            return new CustomizableEngine() { PowerSetting = extension.Power };
        }
        return base.ConvertFrom(context, culture, value);
    }
}

Our XAML markup now looks like this:

<Robot Name="Tony Stark" Engine="{CustomizableEngine Power=0}" xmlns="https://ludovic.chabant.com/xaml" />

And we can change the "Power=0" bit to adjust the Power property of the engine.

MarkupExtensions are very handy to make the XAML markup shorter and more readable, but some people find it ugly when you start nesting them:

<Robot
    xmlns="https://ludovic.chabant.com/xaml"
    Name="Tony Stark"
    Engine="{CustomizableEngine Power=0}"
    Weapon="{MachinGun AmmoType={HollowPoint Diameter=7.62}, Model={SteyrAug Scope={LaserScope}}, ClipCount=10}"
    />

I personally don’t mind too much (unless it starts looking ridiculous, which the previous example is getting pretty close to). As a rule of thumb, you probably should not define a MarkupExtension for a type if:

  • The type "contains" another type: the MarkupExtension syntax is really for property initialization. If the type is a container, like a collection, a UI panel, a decorator or facade, or any other thing that "logically contains" another object, there’s no point in giving the ability to declare an instance of this type in one line, as there’s a good chance the contained type won’t be that concise.
  • The type has properties that are not "trivial": by trivial, I mean values that are either basic types (int, float, bool, string, etc.), or moderately simple structures. In that latter case, there might be a need for a small nested MarkupExtension, but it’s okay if there’s only a couple of those properties, and their types have only a couple of properties themselves.

Of course, as always in "should"-based sentences, there are exceptions. Most of the time, usage and design dictate this kind of decision. For example, your type might have several reference type properties, but what they represent means that in nominal cases, initialization is short and simple. The Binding class in WPF comes to mind: it’s a fairly complex class, but most of the time, you only set the "Path" property anyway.

Now you can start using XAML as your own serialization format! You might want to read a bit on attributes like ContentPropertyAttribute or DependsOnAttribute, too. They’re simple enough to understand, and will come in handy when you start mapping your object model to XAML.
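
For instance, here’s a hedged sketch of what ContentPropertyAttribute does; I’m arbitrarily picking the Weapon property from the Robot class of this series:

// Illustration only: with this attribute, the XAML parser accepts
// <Robot ...><MachineGunWeapon /></Robot> and assigns the nested element
// to Weapon, without needing the <Robot.Weapon> wrapper element.
[System.Windows.Markup.ContentProperty("Weapon")]
public class Robot
{
    public string Name { get; set; }
    public Engine Engine { get; set; }
    public Weapon Weapon { get; set; }
}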

The last thing to do is package all your XAML and resource files in an OpenXML package, and you will truly have a 21st century file format you can be proud of!


Almost everything you need to know about XAML serialization (part 1)

The XML serialization API in .NET is pretty cool, but if you really want your data to look hype and futuristic, you can try out XAML!

XAML is not just for WPF or WF, it’s actually a generic XML-based language used to describe hierarchies of .NET objects. You can therefore use it to serialize your own stuff. However, unlike the XmlSerializer which you can twist into writing and reading any XML, the XAML serializer will conform to the XAML “standard”.

Let’s start with some simple stuff. Because I saw Iron Man not too long ago, and Firefox 3, about to be released, has this “robots” theme going on, let’s describe a robot in XAML.

Here’s the Robot class, and related classes:

public class Robot
{
    public string Name { get; set; }
    public Engine Engine { get; set; }
    public Weapon Weapon { get; set; }
}
public abstract class Engine
{
    public abstract int Power { get; }
}
public abstract class Weapon
{
    public abstract int Damage { get; }
    public abstract int AmmunitionCount { get; }
}

Let’s quickly serialize it in XAML:

static void Main(string[] args)
{
    Robot r = new Robot()
    {
        Name = "Tony Stark",
        Engine = new ElectricEngine(),
        Weapon = new MachineGunWeapon()
    };
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.Indent = true;
    using (XmlWriter writer = XmlWriter.Create("robot.xaml", settings))
    {
        XamlWriter.Save(r, writer);
    }
}

Note that you need to reference WindowsBase and PresentationFramework for your project to compile. The XAML serializer is implemented as part of WPF, so you need WPF even though you’re not really using it. I think that’s because the XAML serializer is specifically tweaked for WPF, the same way Silverlight has its own XAML serializer. The XAML syntax, however, is independent from those frameworks. For now, we have to use WPF’s serializer, but maybe in the future we’ll see a more general-purpose serializer made available.
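
By the way, reading the file back is just as short. Here’s a minimal sketch (no error handling), relying on the same assemblies:

// Deserialize the robot we just saved.
using (XmlReader reader = XmlReader.Create("robot.xaml"))
{
    Robot r = (Robot)XamlReader.Load(reader);
    Console.WriteLine(r.Name);  // prints "Tony Stark"
}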

Here’s the resulting “robot.xaml” markup:

<?xml version="1.0" encoding="utf-8"?>
<Robot Name="Tony Stark" xmlns="clr-namespace:XamlFun;assembly=XamlFun">
  <Robot.Engine>
    <ElectricEngine />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

If you’re running Visual Studio Express, remember that the output file will end up either in the Debug or Release directory depending on how you launched the program (F5 vs. Ctrl+F5), unless you told Visual Studio that you know what you’re doing.

The first thing to change is to make it look a bit more professional. What’s that ugly XML namespace? Let’s have a fancy one. This is done using the XmlnsDefinitionAttribute, set on the whole assembly:

[assembly: System.Windows.Markup.XmlnsDefinition("https://ludovic.chabant.com/xaml", "XamlFun")]

Now the markup looks slightly better:

<?xml version="1.0" encoding="utf-8"?>
<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <ElectricEngine />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

In case you have several XML namespaces in your markup (e.g. you’re serializing data from several different .NET namespaces, or different libraries and APIs), you might want to suggest a cool namespace name to use for your stuff. This is done using the XmlnsPrefixAttribute:

[assembly: System.Windows.Markup.XmlnsPrefix("https://ludovic.chabant.com/xaml", "fun")]

This will tell the XAML serializer to use an XML namespace prefix called “fun” for our stuff. This is just a suggestion, though, so it can still choose another prefix. In our case, since there’s no collision, the serializer decides it can keep our namespace as the default one, with no prefix at all.

Now, you might complain that this markup is incredibly verbose for what it describes. I mean, XML is pretty verbose most of the time, but this is crazy. Let’s fix that.

You will notice that the “Name” property was serialized to an XML attribute, and it’s the sensible thing to do in this case. The “Engine” and “Weapon” properties, however, have been serialized to XML elements because they’re .NET objects, and the serializer doesn’t know if it can express them in a more compact way. Since XML doesn’t have much else besides elements and attributes, if you’re not happy with an element, it means you want an attribute, which means you want a string.

As a .NET programmer, you know about TypeConverters, so your first bet is to create one for the Engine type that will convert to a string. This one is very simple, and just converts the Engine instance to the name of the Engine’s implementation type.

public class EngineTypeConverter : TypeConverter
{
    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(string))
            return true;
        return base.CanConvertTo(context, destinationType);
    }
    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(string))
        {
            Engine engine = (Engine)value;
            return engine.GetType().Name;
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
}

Then you decorate the Engine type appropriately (if you want to use this converter only for the Robot’s Engine property, and not for all instances of Engine, you can put the TypeConverterAttribute on the property instead of the type):

[TypeConverter(typeof(EngineTypeConverter))]
public abstract class Engine
{
    public abstract int Power { get; }
}

However, this doesn’t change much. Running the program gives you the same markup as before. This is because the XAML serializer is not completely stupid, and won’t serialize something it won’t be able to deserialize later. In our case, we don’t say how we can also convert a string to an Engine! Adding the following lines to the EngineTypeConverter fixes the problem, even though we don’t implement the ConvertFrom method:

public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
{
    if (sourceType == typeof(string))
        return true;
    return base.CanConvertFrom(context, sourceType);
}

Running the program with breakpoints in both CanConvertTo and CanConvertFrom tells us that, indeed, the XAML serializer is querying for conversion to and from string. It’s actually asking for this quite a few times, and I’m wondering if this could be optimized in a future service pack for .NET 3.0… You’ll also realize that it’s querying for conversion to and from another type, MarkupExtension, but we’ll get to this in part 2 of this series.

Now here’s our markup:

<Robot Name="Tony Stark" Engine="ElectricEngine" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

Much better, but not quite superb either. The problem here is that we can’t customize much of the engine. Our XML attribute only specifies the type of engine we’re using, and if we were to implement the ConvertFrom method on the EngineTypeConverter, we would only be able to build a new instance of the given type, and that’s it. It wouldn’t even quite work well because we’re not printing the full name of the type.

The problem we have here is that we want to make it easy for users and designer applications to read and write our robot markup. If we have a few “built-in” engines, we want it to be “natural” to use them.

Let’s say we have the following built-in engines: an electric engine, a nuclear engine, and a coal engine (because I love steampunk stuff).

public class ElectricEngine : Engine
{
    public override int Power
    {
        get { return 2000; }
    }
}
public class NuclearEngine : Engine
{
    public override int Power
    {
        get { return 10000; }
    }
}
public class CoalEngine : Engine
{
    public override int Power
    {
        get { return 8; }
    }
}

It would be nice if we could write XAML like this:

<Robot Name="Tony Stark" Engine="Nuclear" />

But on the other hand, if we want to use a more complex engine like the following CustomizableEngine, we want it to revert back to the XML element based syntax:

public class CustomizableEngine : Engine
{
    private int mPower;
    public override int Power
    {
        get { return mPower; }
    }
    public int PowerSetting
    {
        get { return mPower; }
        set { mPower = value; }
    }
}

<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <CustomizableEngine PowerSetting="40" />
  </Robot.Engine>
  <Robot.Weapon>
    <MachineGunWeapon />
  </Robot.Weapon>
</Robot>

We could even imagine that the built-in engine types can be customized too, but if you want the default values, you can use the simpler syntax.

The WPF designers ran into the same problem for things like brushes and colours, for which they wanted you to be able to just write “Red” and “Green” and “DarkOlive”, and have the appropriate graphic object be created. To this end, they created the ValueSerializer class. This class is a bit similar to TypeConverter, but only converts to and from strings. The big difference, however, is that the object to convert is passed as a parameter to the CanConvertToString method, which is not the case with the TypeConverter.

public class EngineValueSerializer : ValueSerializer
{
    public override bool CanConvertToString(object value, IValueSerializerContext context)
    {
        if (value is NuclearEngine ||
            value is ElectricEngine ||
            value is CoalEngine)
            return true;
        return base.CanConvertToString(value, context);
    }
    public override string ConvertToString(object value, IValueSerializerContext context)
    {
        return value.GetType().Name.Replace("Engine", "");
    }
}

Similarly to the TypeConverter, you need to decorate either the type or the property with the ValueSerializerAttribute. You can even leave the TypeConverterAttribute because the XAML serializer will give priority to the ValueSerializer.

[TypeConverter(typeof(EngineTypeConverter))]
[ValueSerializer(typeof(EngineValueSerializer))]
public abstract class Engine
{
    public abstract int Power { get; }
}

Now we can say that if the object to convert is one of our 3 built-in types, we can convert it to a string which would be “Nuclear”, “Electric” or “Coal” (the type name without the “Engine” suffix). If it’s something else (like our CustomizableEngine), the serializer won’t convert it, and will revert back to the default XAML syntax, which uses XML elements. Obviously, for real code that has many built-in types, you will need to replace that ugly “if” statement with a lookup in a table, or with inspecting the type for some custom attribute, or something.
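
One thing this post doesn’t show is loading that markup back. When parsing, XAML goes through the TypeConverter rather than the ValueSerializer, so the EngineTypeConverter would presumably need a ConvertFrom along these lines (a sketch only, with the same ugly “if” chain):

public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
{
    // Map the short names written by EngineValueSerializer back to instances.
    string name = value as string;
    if (name == "Nuclear")
        return new NuclearEngine();
    if (name == "Electric")
        return new ElectricEngine();
    if (name == "Coal")
        return new CoalEngine();
    return base.ConvertFrom(context, culture, value);
}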

Well, that’s all wonderful, and if we apply the same thing to the Weapon class, we can end up with some nice markup:

<Robot xmlns="https://ludovic.chabant.com/xaml"
    Name="Tony Stark"
    Engine="Nuclear"
    Weapon="MachineGun" />

But if we want to customize the engine or the weapon, we need to switch to that verbose syntax:

<Robot Name="Tony Stark" xmlns="https://ludovic.chabant.com/xaml">
  <Robot.Engine>
    <CustomizableEngine PowerSetting="12" />
  </Robot.Engine>
  <Robot.Weapon>
    <LaserWeapon BeamColor="Red" />
  </Robot.Weapon>
</Robot>

In part 2 of this series, we’ll see how we can use the MarkupExtension type to keep some nicely short syntax even when specifying values for object properties. Stay tuned!


XAML markup is for real men

The Visual Studio designer for WPF is quite lame as it stands now, and although it will get a lot better when Visual Studio 2008 SP1 is released, if you’re a WPF developer, you really need to know how to read and write XAML by hand, much like web developers and designers know how to read and write HTML and CSS by hand.

Since you’re not a wussy, what you actually want is for Visual Studio to always show a full view of the XAML markup when you open a XAML document. It’s also a good thing because you won’t have to wait for a few seconds for the designer to load the first time.

To do this in Visual Studio 2008 Express, go to the Options dialog, check “Show all settings“, and go to “Text Editor“, “XAML“, “Miscellaneous“. There, check the “Always open documents in full XAML view“.

Voilà!


Custom provider attributes in a configuration element (part 2)

In the first part of this little series, we implemented a simple, read-only way to get custom attributes from a configuration element, using a provider pattern use case. We ended by trying to modify the configuration file, without much success.

Right now, we have the following method, called at the end of the program:

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    section.CookieProvider.Options["maxCookies"] = "6";
    configuration.Save();
}

This doesn’t change anything in the configuration file, even though we’re calling the Save method. This is because .NET’s configuration is kind of smart, and realizes that nothing has changed, therefore nothing needs to be written. How can it be that nothing has changed, you say, since we modified the Options collection? Oh, but this collection can’t be "seen" by the Configuration class because it’s just a local property, unlike the other one, Type, which is a property wrapper around ConfigurationElement’s internal property system:

public NameValueCollection Options { get; private set; }

[ConfigurationProperty("type", IsRequired = true)]
// Validation/conversion attributes removed for readability...
public Type Type
{
    get { return this["type"] as Type; }
    set { this["type"] = value; }
}

Even if we force the configuration to save itself, using some overloads of the Save method, we end up with a configuration file that lost its custom attributes, leaving only the provider type attribute, because the configuration doesn’t know about those options.

We can’t turn Options into a ConfigurationProperty, though, because it would mean that we had a nested collection inside our "cookieProvider" XML element. We don’t want that. We want to dynamically add new properties to ConfigurationElement.

Looking at the ConfigurationElement class, we can spot something promising: there’s a virtual property called Properties which, it seems, contains the element’s configuration properties. We can dynamically add items in it:

public class CookieProviderConfigurationElement : ConfigurationElement
{
    // Properties...
    // Constructor...

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        if (!Properties.Contains(name))
            Properties.Add(new ConfigurationProperty(name, typeof(string), null));
        this[name] = value;
        Options[name] = value;
        return true;
    }

    // ValidateProviderType...
}

Now if we force a configuration save, we keep our custom attributes!

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    section.CookieProvider.Options["maxCookies"] = "6";
    configuration.Save(ConfigurationSaveMode.Minimal, true);    // Force a minimal configuration change
}

<configuration>
  <configSections>
    <section name="cookieFactory" type="ConfigurationTest.CookieFactoryConfigurationSection, ConfigurationTest" />
  </configSections>
  <cookieFactory>
    <cookieProvider type="ConfigurationTest.SimpleCookieProvider, ConfigurationTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
      maxCookies="5" flavor="chocolate" />
  </cookieFactory>
  <system.web>
    <trust level="Full" />
    <webControls clientScriptsLocation="/aspnet_client/{0}/{1}/" />
  </system.web>
</configuration>

We get a bit of additional stuff by forcing a save, but that doesn’t matter.

However, we notice that the "maxCookies" attribute is still "5", even though we set it to "6". Well, that’s no surprise given that, once again, we set that value on the local Options property that’s unknown to the configuration API. We have to somehow modify the internal Properties collection when we modify the options.

At this point, I should let the guy in the audience who’s been raising his hand since the beginning ask his question: "what about the ProviderSettings class?". Well, the ProviderSettings is actually almost what we want. It has a NameValueCollection property called Parameters, and if you modify it, you get a correctly modified configuration when you save. However, it has hard coded, required, "name" and "type" configuration properties, which means that if your base provider class has more or less stuff, you’ll have to write your own ConfigurationElement class from scratch anyway (inheriting from ProviderSettings won’t help because you’ll end up with your other "standard" configuration properties in the options collection, which might be confusing).

Besides, isn’t it fun to reinvent the wheel to learn how it works? Let’s resume our investigations, then.

Now we have a few different choices:

  • We could write a ConfigurationOptionCollection class with a similar interface to that of NameValueCollection. This class would be linked to a ConfigurationElement, and would keep its items in sync with the element’s Properties collection. The problem with this approach is that the Properties property is protected, so we would need the ConfigurationElement to also implement an interface or inherit another abstract class that would give our ConfigurationOptionCollection a way to manipulate it. We would also need that interface or abstract class to give a way to prevent illegal actions, like adding an option that has the same name of a regular configuration property (in our case, for example, adding an option called "type", which would collide with the provider type required property). That’s 2 additional classes, and some complexity to keep things in sync. Since most use cases for configuration are either read-only, or read and then write in 2 separate, one-shot, operations, we will rarely need to keep things in sync.
  • We could create an abstract subclass to ConfigurationElement that would keep track of options for us. Every time the configuration properties are accessed, the NameValueCollection for the options is rebuilt. This is how the ProviderSettings works.

Here, I went with a simpler version of the second choice, where I replaced the Options property with a couple of methods. This is purely so there’s minimal code to make it work, and we can see what the important bits are. Introducing the LooseConfigurationElement (yeah, I’m bad with names):

public abstract class LooseConfigurationElement : ConfigurationElement
{
    protected LooseConfigurationElement()
    {
    }

    public void SetOption(string name, string value)
    {
        if (!this.Properties.Contains(name))
        {
            ConfigurationProperty optionProperty = new ConfigurationProperty(name, typeof(String), null);
            this.Properties.Add(optionProperty);
        }
        this[name] = value;
    }

    public NameValueCollection GetOptions()
    {
        NameValueCollection options = new NameValueCollection();
        foreach (ConfigurationProperty property in this.Properties)
        {
            if (IsOptionProperty(property))
            {
                options.Add(property.Name, this[property.Name] as string);
            }
        }
        return options;
    }

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        SetOption(name, value);
        return true;
    }

    protected abstract bool IsOptionProperty(ConfigurationProperty property);
}

And the refactored CookieProviderConfigurationElement:

public class CookieProviderConfigurationElement : LooseConfigurationElement
{
    private static ConfigurationProperty sTypeProperty =
        new ConfigurationProperty(
            "type",
            typeof(Type),
            null,
            ConfigurationPropertyOptions.IsRequired);

    [ConfigurationProperty("type", IsRequired = true)]
    [TypeConverter(typeof(TypeNameConverter))]
    [CallbackValidator(Type = typeof(CookieProviderConfigurationElement), CallbackMethodName = "ValidateProviderType")]
    public Type Type
    {
        get { return this[sTypeProperty] as Type; }
        set { this[sTypeProperty] = value; }
    }

    public static void ValidateProviderType(object type)
    {
        if (!typeof(ICookieProvider).IsAssignableFrom((Type)type))
        {
            throw new ConfigurationErrorsException("The cookie provider must implement the ICookieProvider interface.");
        }
    }

    protected override bool IsOptionProperty(ConfigurationProperty property)
    {
        if (property == sTypeProperty)
            return false;
        return true;
    }
}
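
With this in place, the GetCookieProvider and SimulateConfigurationChange methods from part 1 would presumably become something like this:

private static ICookieProvider GetCookieProvider()
{
    var section = ConfigurationManager.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    if (section == null)
        throw new Exception("No cookie factory found!");
    var provider = (ICookieProvider)Activator.CreateInstance(section.CookieProvider.Type);
    // The options are now rebuilt from the element's configuration properties.
    provider.Initialize(section.CookieProvider.GetOptions());
    return provider;
}

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    // SetOption keeps the internal Properties collection in sync, so the save works.
    section.CookieProvider.SetOption("maxCookies", "6");
    configuration.Save(ConfigurationSaveMode.Minimal, true);
}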

The LooseConfigurationElement is weak in several aspects, like for example the fact that it’s not thread-safe. But it fulfills our requirements, and you can see how we can easily play with which configuration properties are "standard" and which ones are "options" through the IsOptionProperty method. This is a small but useful improvement over the ProviderSettings class, which is only written for the ProviderBase class.

You can improve on the LooseConfigurationElement pretty easily:

  • First, you might want to not rebuild the Options collection every time a client asks for it. There are only 2 ways for this collection to be modified: a client modifies it directly, or somebody adds optional properties to the internal Properties collection. Since this second situation only happens in OnDeserializeUnrecognizedAttribute, we can say that we, in fact, almost never have to rebuild the Options collection!
  • However, it also means that we need to keep the internal Properties collection in sync when people modify the Options collection. Since NameValueCollection doesn’t have any dirty flag, you can either create your own collection with this feature, use something like ObservableCollection instead, or go down the ProviderSettings route, which partially rebuilds the Properties collection every time it is accessed. In this case, we’re almost trading rebuilding one collection for rebuilding another… Also, you’ll have to juggle options already added as ConfigurationProperties, new ones that are only in name/value form, and those that have been removed. But you’re smart, you can do that.

Well that’s it, we’ve got all the info we need. I hope it was useful for some of you!


RSS feeds and the zen of the newspaper reader

I see a lot of articles on the internet these days about ways to trim down your RSS subscriptions, how to manage your time to read through all your items, etc. My opinion on this is the complete opposite.

I say: subscribe to many RSS feeds. Leave most of them unread. Or set them as read after merely glancing at the article titles.

Most RSS feeds have crappy articles (insert a snappy joke about this one here) or, at least, articles not relevant to you. Most of the RSS feeds you're subscribed to will only have a small fraction of posts that are of any use to you. It's the case even with great sites like LifeHacker or DownloadSquad. It may be because you subscribed to the whole feed, instead of a tag-specific feed, or it may just be because that's the way things are.

When you read a newspaper, you leave articles unread all the time. You skim through a page and only read the articles that look interesting. There are some pages you know you're not interested in at all, like the astrology and crosswords page, the obituary page, the sports page, the economy page, whatever. These items are effectively kept unread. It doesn't matter. You can just skim through your feed items and read the ones that look interesting, given their title or author. Leave the other ones unread.

What if you miss something interesting or important? Well, get over it. You're missing lots of interesting or important things all the time anyway. In this day and age, you've got to trust your judgement in filtering out information, and you've got to believe that if something is interesting or important enough, it will resurface in several other feeds you're also subscribed to. Hence the need to subscribe to many feeds.

What about the fact that, with technology, we should really have something that's better than the way we used to read newspapers? Well, you have a better way already. First, you don't have to go outside to get the newspaper. Second, you don't have to pay for it (well, the price is bundled with your internet access). Third, you could be reading more targeted stuff by filtering your feeds with keywords and search queries. Yahoo Pipes and other similar services can help you with that if that's your thing, although you'll miss the stuff you don't yet know you're interested in (which is an interesting paradox). But at the end of the day, you still need to filter out some stuff.

This whole thing is really about being okay with leaving lots of unread stuff. Recently, there's been some hype about the "zen mailbox", where people tell you that it's okay to delete email or not reply to it. This is the same philosophy, applied to RSS feeds.

Be zen. Unless you live in New York.


Custom provider attributes in a configuration element (part 1)

A common pattern in .NET is the “provider pattern“, where you have an abstraction for pulling data out of something (a database, a file, your ass, etc.), and one or several implementations of this interface (usually, one for each “something” you can pull data out of).

For this example, we’re going to get cookies (the biscuit, not the browser token, you sad nerd) from some “cookie provider”.

public class Cookie
{
    public string Flavor { get; set; }
}

public interface ICookieProvider
{
    Cookie ProvideCookie();
}

class Program
{
    static void Main(string[] args)
    {
        ICookieProvider provider = GetCookieProvider();
        Cookie cookie = provider.ProvideCookie();
        while (cookie != null)
        {
            Console.WriteLine("Got {0} flavored cookie.", cookie.Flavor);
            // Ask for the next cookie, otherwise we'd loop forever.
            cookie = provider.ProvideCookie();
        }
        Console.WriteLine("No more cookies.");
    }
}

A common way of setting which provider to use is to set this in the configuration file. Let’s create a custom configuration section for this (you need to add System.Configuration as a reference to your project), with a “cookieProvider” element in it.

public class CookieFactoryConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("cookieProvider", IsRequired = true)]
    public CookieProviderConfigurationElement CookieProvider
    {
        get { return this["cookieProvider"] as CookieProviderConfigurationElement; }
        set { this["cookieProvider"] = value; }
    }
}

public class CookieProviderConfigurationElement : ConfigurationElement
{
    [ConfigurationProperty("type", IsRequired = true)]
    [TypeConverter(typeof(TypeNameConverter))]
    [CallbackValidator(Type = typeof(CookieProviderConfigurationElement), CallbackMethodName = "ValidateProviderType")]
    public Type Type
    {
        get { return this["type"] as Type; }
        set { this["type"] = value; }
    }

    public static void ValidateProviderType(object type)
    {
        if (!typeof(ICookieProvider).IsAssignableFrom((Type)type))
        {
            throw new ConfigurationErrorsException("The cookie provider must implement the ICookieProvider interface.");
        }
    }
}

We should really use some static ConfigurationProperty members to index our element in the gets/sets, instead of plain strings, but it will do for this example. Also, note the use of the TypeConverter and CallbackValidator attributes, which are pretty neat because they let the configuration system do all the boring bits of getting strongly typed values out of the configuration file. Last, note how we test whether the supplied type implements our interface. This small bit of code has been discussed on other blogs, and I believe this is the best method, until Microsoft decides to add a more straightforward method to System.Type.
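
For reference, that static ConfigurationProperty approach would look something like the following sketch (the rest of this post sticks to plain string indexers; the validation and conversion attributes from the class above are omitted for brevity):

// Sketch only: index the element with a shared ConfigurationProperty instance
// instead of repeating the "type" string everywhere.
private static readonly ConfigurationProperty sTypeProperty =
    new ConfigurationProperty("type", typeof(Type), null, ConfigurationPropertyOptions.IsRequired);

[ConfigurationProperty("type", IsRequired = true)]
public Type Type
{
    get { return this[sTypeProperty] as Type; }
    set { this[sTypeProperty] = value; }
}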

Now we can specify the cookie provider in our configuration file:

<configuration>
  <configSections>
    <section name="cookieFactory" type="ConfigurationTest.CookieFactoryConfigurationSection, ConfigurationTest" />
  </configSections>
  <cookieFactory>
    <cookieProvider type="ConfigurationTest.SimpleCookieProvider, ConfigurationTest" />
  </cookieFactory>
</configuration>

And implement the GetCookieProvider method without worrying too much about validating arguments since the configuration API did that for us (it’s a required configuration element):

private static ICookieProvider GetCookieProvider()
{
    var section = ConfigurationManager.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    if (section == null)
        throw new Exception("No cookie factory found!");
    return (ICookieProvider)Activator.CreateInstance(section.CookieProvider.Type);
}

The implementation for the SimpleCookieProvider, used in the configuration file, is, well, simple. It just creates up to 10 cookies.

public class SimpleCookieProvider : ICookieProvider
{
    private int mProvidedCookieCount = 0;

    #region ICookieProvider Members

    public Cookie ProvideCookie()
    {
        // Stop once we've handed out 10 cookies.
        if (mProvidedCookieCount++ >= 10)
            return null;
        return new Cookie() { Flavor = "simple" };
    }

    #endregion
}

When we run the program, we get a list of "Got simple flavored cookie." lines, followed by "No more cookies."

This is all fine, but what if we want to pass some custom (implementation-specific) values to initialize our cookie provider? For example, I’d like to specify the maximum number of cookies to produce, or the flavor of those cookies. Obviously, we can’t do the following:

public interface ICookieProvider
{
    void Initialize(int maxCookies, string flavor);
    Cookie ProvideCookie();
}

The Initialize method will get very crowded as we create new providers like, for instance, a provider that will allocate a random number of cookies, picking flavors randomly out of an array of strings.

What would be nice would be to declare "freeform" attributes in the XML of the configuration file, like this:

<configuration>
  <configSections>
    <section name="cookieFactory" type="ConfigurationTest.CookieFactoryConfigurationSection, ConfigurationTest" />
  </configSections>
  <cookieFactory>
    <cookieProvider type="ConfigurationTest.SimpleCookieProvider, ConfigurationTest"
      maxCookies="5" flavor="chocolate" />
  </cookieFactory>
</configuration>

The options would be passed as a collection of name/value pairs, and each cookie provider would be free to do whatever it wants with those options. This is actually what the ProviderBase class does (from the System.Configuration.Provider namespace):

public abstract class ProviderBase
{
    protected ProviderBase();
    public virtual string Description { get; }
    public virtual string Name { get; }
    public virtual void Initialize(string name, NameValueCollection config);
}

It gets initialized with a name and a collection of name/value pairs.

We could replace our interface by an abstract class that inherits from ProviderBase, but that’s just too much overhead for our simple project. Let’s just refactor ICookieProvider and SimpleCookieProvider:

public interface ICookieProvider
{
    void Initialize(NameValueCollection options);
    Cookie ProvideCookie();
}

public class SimpleCookieProvider : ICookieProvider
{
    private int mProvidedCookieCount = 0;
    private int mMaxCookies = 10;
    private string mFlavor = "simple";

    #region ICookieProvider Members

    public void Initialize(NameValueCollection options)
    {
        string maxCookiesSetting = options.Get("maxCookies");
        if (maxCookiesSetting != null)
            mMaxCookies = int.Parse(maxCookiesSetting);
        string flavor = options.Get("flavor");
        if (flavor != null)
            mFlavor = flavor;
    }

    public Cookie ProvideCookie()
    {
        // Stop once we've handed out the configured maximum number of cookies.
        if (mProvidedCookieCount++ >= mMaxCookies)
            return null;
        return new Cookie() { Flavor = mFlavor };
    }

    #endregion
}

Now when we run the program, we get a ConfigurationErrorsException because some unrecognized attributes were found in the configuration file. Getting them to be ignored and stored is easy enough, with the OnDeserializeUnrecognizedAttribute virtual method of the ConfigurationElement class:

public class CookieProviderConfigurationElement : ConfigurationElement
{
    public NameValueCollection Options { get; private set; }

    [ConfigurationProperty("type", IsRequired = true)]
    [TypeConverter(typeof(TypeNameConverter))]
    [CallbackValidator(Type = typeof(CookieProviderConfigurationElement), CallbackMethodName = "ValidateProviderType")]
    public Type Type
    {
        get { return this["type"] as Type; }
        set { this["type"] = value; }
    }

    public CookieProviderConfigurationElement()
    {
        Options = new NameValueCollection();
    }

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        Options.Add(name, value);
        return true;
    }

    public static void ValidateProviderType(object type)
    {
        if (((Type)type).GetInterface(typeof(ICookieProvider).Name) == null)
        {
            throw new ConfigurationErrorsException("The cookie provider must implement the ICookieProvider interface.");
        }
    }
}

Let’s not forget to actually call the Initialize method:

private static ICookieProvider GetCookieProvider()
{
    var section = ConfigurationManager.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    if (section == null)
        throw new Exception("No cookie factory found!");
    var provider = (ICookieProvider)Activator.CreateInstance(section.CookieProvider.Type);
    provider.Initialize(section.CookieProvider.Options);
    return provider;
}

Now when we run the program again, we get the expected output: a short list of "Got chocolate flavored cookie." lines, then "No more cookies."

This is all fine and dandy for simple programs and simple requirements, but it doesn’t work for more elaborate scenarii (yes, I’m the kind of guy that says “scenarii” instead of “scenarios”). For example, what if we had an “Options” dialog that allowed the user to modify the configuration of the program?

Let’s simulate this by hard-coding a change in the configuration:

private static void SimulateConfigurationChange()
{
    var configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = configuration.GetSection("cookieFactory") as CookieFactoryConfigurationSection;
    section.CookieProvider.Options["maxCookies"] = "6";
    configuration.Save();
}

This method is called at the end of the program. When we run it, the configuration file isn’t changed.

What’s wrong? We’ll get on that in part 2 of this series!


CreateProperty task in a skipped target

In MSBuild, you can specify inputs and outputs for a target so that the target is skipped when the outputs are more recent than the inputs. However, you get a strange behaviour when you put a <CreateProperty> task inside such a target.

Let’s look at the following MSBuild project:

<Project DefaultTargets="DoIt" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <WhyIsThisCreated>everything is okay</WhyIsThisCreated>
  </PropertyGroup>

  <Target Name="DoIt" DependsOnTargets="CreateTestFiles; DoStuffMaybe">
    <Message Text="I say: $(WhyIsThisCreated)" />
  </Target>

  <Target Name="CreateTestFiles">
    <WriteLinesToFile Lines="input" File="input.txt" Overwrite="true" />
    <WriteLinesToFile Lines="output" File="output.txt" Overwrite="true" />
  </Target>

  <Target Name="DoStuffMaybe" Inputs="input.txt" Outputs="output.txt">
    <Error Text="We shouldn't be there!" />
    <CreateProperty Value="oh my" Condition="">
      <Output TaskParameter="Value" PropertyName="WhyIsThisCreated" />
    </CreateProperty>
  </Target>

</Project>

This project creates two files, “input.txt” and “output.txt”. Then, the “DoStuffMaybe” target is tentatively run, but is skipped because “output.txt” is newer. We make super sure that this target won’t be run by adding an <Error> task in there.

But when you run this project, the output says “I say: oh my”. Somehow, the <CreateProperty> task was processed, and the value of $(WhyIsThisCreated) was set to “oh my”!

I don’t really know what’s going on, but the solution to this problem is, as is the case most of the time, to read the documentation. The <CreateProperty> page on MSDN informs us that another output parameter is available on the task:

ValueSetByTask

Optional String output parameter.

Contains the same value as the Value parameter. Use this parameter only when you want to avoid having the output property set by MSBuild when it skips the enclosing target because the outputs are up-to-date.

If we replace TaskParameter="Value" with TaskParameter="ValueSetByTask" in the MSBuild project, we finally get the expected result: “I say: everything is okay”.

This RTFM saying never gets old, does it?


Regain control over build configurations in Visual Studio Express

I do almost all my development at home with Visual Studio Express. These products are wonderful and free. Well… granted, if they were not free, they would also be less wonderful too, probably. But they’re still great pieces of software, and I’m pretty sure they played a critical part in building the vibrant .NET community we have now. I wish Microsoft would also release a free “Express” version of Office, but I guess I can keep on dreaming for a while longer.

The Express versions of Visual Studio have of course less features than their “professional” counterparts. Some features, however, are present, but are just disabled by default. This is the case for the build configurations.

By default, you only get a couple of disabled combo-boxes, and you only know what’s supposed to be there if you’ve already worked with Visual Studio Professional. Visual Studio Express will switch between Debug and Release versions of your project depending on what you’re doing. For example, if you start the debugger by pressing F5, the “play” button, or choosing “Debug > Start Debugging“, it will switch to the Debug configuration. If you start the program without debugging by pressing Ctrl+F5 or choosing “Debug > Start Without Debugging“, it will switch to the Release configuration.

This is fine, but it can be annoying at times, especially when your program has input or output files. For example, if you write a log file in the same directory as the executable, you’ll have to remember to open the correct one depending on whether you pressed F5 or Ctrl+F5. This can lead to situations where you’re looking for an error when there’s nothing wrong, simply because you’re not looking in the correct directory!

Also, you might want more configurations than just Debug and Release. You might want completely different configurations. If you’ve worked with Visual Studio Professional, you may even be wondering what happened to the Configuration Manager in the Express versions.

To fix this, choose “Tools > Options“, and then check “Show all settings” on the bottom left corner of the dialog. You’ll get all the advanced options, including the “Projects and Solutions” category. Check the option called “Show advanced build configurations“.

Click OK, and look at those beautifully enabled combo-boxes!

Now you can set the current build configuration, and whether you start your project through the debugger or not won’t change the executable being launched.

Also, the Configuration Manager is back, and will let you create new configurations.

You will soon find that other parts of the UI now have more options related to build configurations, such as the Project Properties interface.


One more on the cloud

First, that's right, I said "cloud". In this day and age, you have to keep up with all the hype terms and buzzwords, otherwise you sound like you're from the 20th century.

Second, yes, here's another blog about programming (mostly in .NET), even though there are thousands of those already. But Jeremy Miller, over at CodeBetter, posted about how I should really blog (well, not me specifically, but since I was reading, I'm pretty sure he was talking to me too):

I've been asked several times over the past month "Should I start a blog? What if…?" The answer is yes, you should. Or more accurately, if you're interested in blogging you shouldn't feel afraid to blog.

He then goes on and gives some pretty sensible "Good", "Bad" and "Ugly" points about starting a blog, along with demystifying the most common excuses for not starting a blog… And guess what? Well, too bad now, here's mine!

Now, however, I made Jeff Atwood sad and angry, as I've committed the sin of meta-blogging, but I guess you get a free pass for the first post, what with the need to introduce yourself and all.

Well, enough ranting for now. I hope to see you again later!