The Stochastic Game

Ramblings of General Geekery

Using Twitter as a feed reader

Robert Scoble famously posted about ditching Google Reader for Twitter a bit more than a year ago, and ever since I’ve been baffled at people moving to Twitter or Facebook to read their news. Business Insider even said a few months ago that Twitter has killed RSS readers. I’d always wondered how Twitter could be any better than RSS… that is, until I actually tried it.

Now I’m just thinking those people are crazy out of their minds.

First, let’s put Scoble aside. This guy is clearly not your average user, he’s really out there. I mean, his performance concerns with Google Reader are completely legitimate, but he must be the only guy on earth who actually follows more than 20,000 people… and still uses the term “friends” to define them, by the way. And I’m not even talking about the feasibility of keeping up with that much information — after all, if he’s clever, he’s using tools to consume it all and make the trending subjects bubble up to the surface, or maybe he does it the other way around by searching for topics he’s interested in right now instead of reading the timeline in a linear fashion. But average people (or, well, at least me) are following real friends (either their blogs, or the links/videos/pictures they share), a few websites pertaining to their interests that may not be related to web technology (knitting? cat breeding?), and other things that can’t be processed by some “trending” tool.

Anyway, I used Twitter instead of Google Reader for a couple months, and here’s what I found.

User Experience

The first thing that bugs me about Twitter, and all of its client applications, is that it effectively has a worse user experience than even the crappiest of RSS feed readers.

Remember why RSS feeds were supposed to improve our productivity, making it easier to get to the information that mattered? They were supposed to do this by bringing the information to us instead of forcing us to go to the information. RSS feeds would deliver the content, and only the content, right there in front of us, and they pretty much delivered on that promise. Sure, some websites only publish an abstract of the article, and you need to go to that website to read the rest, but that’s the worst case scenario. The best case scenario is that you read the whole thing in a distraction free environment.

Now what about Twitter? The best case scenario is the title of the article, with a link to the website. Yep. As far as I’m concerned, this is a step backwards in terms of productivity: I read some entry on Twitter, click a shortened URL, wait for some HTTP redirections, and then end up on the full website with animated ads and other distractions. Compare that to reading the article right now with only the content in front of me. And I’m not even talking about the worst case scenario, which is a shortened title of the article with a link, or some obscure smart-ass comment with a link.

Then there’s the problem of knowing what you’ve already read. All RSS clients track the items you’ve read. Most even synchronize that data over the cloud. Compare that to Twitter which doesn’t remember anything and for which I always have to look out for that point in the timeline where I’ve caught up with all the new stuff. And that’s for a naive, linear approach. Most of the time, I’m going to read stuff in a non-linear way, either by using lists or filtering by user (on Twitter) or filtering by tag or website (on Google Reader), so the stuff I’ve already read could end up mixed up with the stuff I haven’t read yet.

Q&A

There’s one thing I keep reading about Twitter being awesome, and that’s its ability to give you answers super quickly. Supposedly, you can tweet a question and get replies within a few minutes, if not seconds.

Well, that’s the biggest load of crap I’ve ever heard.

First, this obviously works only if you’ve got a sizeable following. A year and a half ago, the average Twitter user had 127 followers. It’s hard to tell how that number was calculated, but let’s assume it did take into account statistical distortions from the few internet celebrities having insane numbers of followers on one end, and all the inactive users (supposedly making up 3/4 of the whole user base) on the other end. Either way, that number is just big enough for a user to get a few answers a day. To really get to the point where you get better answers faster than your search engine of choice, I’m guessing you need to hit the 500 followers mark, maybe even 1000. I wouldn’t know, I’m far from being that popular… And even then, what? Twitter gets better for you as you get more people to follow you? Ok… what else sounds like this? Yeah: pyramid schemes. And that’s known to be unsustainable. There will be lots of users down there with 20 followers trying to get some attention.

Second, what happened to all that concern about privacy? People start complaining the second Google introduces their Web Search History, but on the other hand it’s OK to search for information by posting stuff publicly on Twitter, which is then indexed a hundred times more?

Technical Stuff

Then there’s the problem of infrastructure. This part has a lot more to do with personal opinions about stuff you may not care about, but I always prefer a more open, distributed design over a more closed, constrained one. For example, if I can sign into a website using either my Facebook or Google account or OpenID, I’ll take OpenID because it’s obviously more open and doesn’t rely on the existence of a single company. Sure, unless you’re running your own OpenID identity server, you’re relying on some third party provider, but you can abstract that from OpenID consumers, and then switch the underlying implementation anytime you want. To pick a more mainstream example, between using a Hotmail or Gmail address and a custom domain address, I’ll choose the custom domain address for similar reasons.

Ok, so this is on the same level as people using open-source software vs. people who don’t care, or people eating organic food vs. people who also don’t care. And in this case, you probably don’t care, and that’s fine, but I personally dislike how Twitter and other similar services create this “siloization” effect on information and data on the web.

On Using Twitter

All of this makes Twitter a pretty horrible experience as a feed reader, at least for me. Does that mean it sucks? Well, kinda, but not necessarily. First, some of the new features indicate how the design team is trying to reduce my first productivity problem: if a link in a tweet points to a picture or a video from a known media service, said picture or video will be displayed in the sidebar right away. When you couple this with the new keyboard shortcuts and the lists, you have an experience that’s somewhat close to that of Google Reader, at least for some of the tweets. I can easily imagine that it’s going to get closer and closer in the near future… but right now it’s still lagging far behind IMHO.

For now I’m back to using Twitter the way it’s effective for me: following actual people’s status updates and link shares. Twitter is wonderful in the way it lets you get closer to what experts and other key people in society have to say on an everyday basis (you wouldn’t believe how “I had pasta today” inspires you when it’s been tweeted by a Nobel prize winner…). Arguably, that’s actually what Twitter is all about: information through people. RSS, on the other hand, is only about delivering content (although Google is trying to add social layers on top of it). The two may feel similar, but they’re not quite the same.

Twitter is also a useful alternative search engine in some situations, but more and more search engines actually include tweets as search results (in Google, you need to click on the “Realtime” result type on the left).

But all things being equal, if I get the choice between following the same thing on Twitter or via RSS, I’ll choose RSS for sure.


Making WPF controls double-clickable

A common UI pattern features the ability to double-click on a control to go in “edit” mode. So for example, you have a TextBlock that shows you the name of an object, and you can double-click it to change it into a TextBox where you can edit that name. At this point, it’s easy to hook up the MouseDoubleClick event, or some other mouse event, but that’s not very MVVM-like, is it?

Thankfully, WPF gives us the awesome attached dependency property API, which we already used for adding auto-complete to TextBoxes. The goal is to be able to write something like this:

<TextBlock Text="{Binding Name}" mi:ExtendedCommands.DoubleClickCommand="{Binding EditNameCommand}" mi:ExtendedCommands.DoubleClickCommandParameter="42" />

When the TextBlock gets double-clicked, the “EditNameCommand” command object on the DataContext would get called with “42” as the parameter. This is super easy to do:

public static class ExtendedCommands
{
    public static readonly DependencyProperty DoubleClickCommandProperty;
    public static readonly DependencyProperty DoubleClickCommandParameterProperty;

    static ExtendedCommands()
    {
        DoubleClickCommandProperty = DependencyProperty.RegisterAttached("DoubleClickCommand", typeof(ICommand), typeof(ExtendedCommands), new UIPropertyMetadata(null, OnDoubleClickCommandPropertyChanged));
        DoubleClickCommandParameterProperty = DependencyProperty.RegisterAttached("DoubleClickCommandParameter", typeof(object), typeof(ExtendedCommands), new UIPropertyMetadata(null));
    }

    public static ICommand GetDoubleClickCommand(DependencyObject obj)
    {
        return (ICommand)obj.GetValue(DoubleClickCommandProperty);
    }

    public static void SetDoubleClickCommand(DependencyObject obj, ICommand value)
    {
        obj.SetValue(DoubleClickCommandProperty, value);
    }

    public static object GetDoubleClickCommandParameter(DependencyObject obj)
    {
        return (object)obj.GetValue(DoubleClickCommandParameterProperty);
    }

    public static void SetDoubleClickCommandParameter(DependencyObject obj, object value)
    {
        obj.SetValue(DoubleClickCommandParameterProperty, value);
    }

    private static void OnDoubleClickCommandPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = d as UIElement;
        if (element != null)
        {
            if (e.OldValue == null && e.NewValue != null)
            {
                element.MouseDown += new MouseButtonEventHandler(Control_MouseDown);
            }
            else if (e.OldValue != null && e.NewValue == null)
            {
                element.MouseDown -= new MouseButtonEventHandler(Control_MouseDown);
            }
        }
    }

    private static void Control_MouseDown(object sender, MouseButtonEventArgs e)
    {
        if (e.ClickCount == 2)
        {
            var element = sender as UIElement;
            if (element != null)
            {
                var command = GetDoubleClickCommand(element);
                var parameter = GetDoubleClickCommandParameter(element);
                if (command != null && command.CanExecute(parameter))
                {
                    e.Handled = true;
                    command.Execute(parameter);
                }
            }
        }
    }
}

This code is pretty naive, but it gets the job done and quickly illustrates how to do it: just register 2 attached dependency properties in a new custom class, and whenever someone uses them on a UI element, start listening to the “MouseDown” event on that element and check the “ClickCount” property to figure out whether it’s a double-click. If it is indeed a double-click, just run the command that was specified for this control!

Obviously, you’ll need to handle error cases a bit better (like for instance if these properties are used on a non-UIElement), make it a bit smarter (for example, register the MouseDoubleClick event instead of the MouseDown event if the target object is a Control; see the sketch below), and other stuff like that. But, well, by now, you should know not to copy/paste code found on blogs without using your brain anyway.
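Here’s a rough, untested sketch of that last idea, reusing the same ExtendedCommands properties as above (Control_MouseDown stays as the fallback for plain UIElements):

private static void OnDoubleClickCommandPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    // Controls expose a dedicated MouseDoubleClick event, so prefer it when available.
    var control = d as Control;
    if (control != null)
    {
        if (e.OldValue == null && e.NewValue != null)
        {
            control.MouseDoubleClick += new MouseButtonEventHandler(Control_MouseDoubleClick);
        }
        else if (e.OldValue != null && e.NewValue == null)
        {
            control.MouseDoubleClick -= new MouseButtonEventHandler(Control_MouseDoubleClick);
        }
        return;
    }

    // Otherwise, fall back to the MouseDown + ClickCount approach shown above.
    var element = d as UIElement;
    if (element != null)
    {
        if (e.OldValue == null && e.NewValue != null)
        {
            element.MouseDown += new MouseButtonEventHandler(Control_MouseDown);
        }
        else if (e.OldValue != null && e.NewValue == null)
        {
            element.MouseDown -= new MouseButtonEventHandler(Control_MouseDown);
        }
    }
}

private static void Control_MouseDoubleClick(object sender, MouseButtonEventArgs e)
{
    var element = sender as UIElement;
    if (element == null)
        return;

    var command = GetDoubleClickCommand(element);
    var parameter = GetDoubleClickCommandParameter(element);
    if (command != null && command.CanExecute(parameter))
    {
        e.Handled = true;
        command.Execute(parameter);
    }
}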


The harpsichord and the clavichord!

Google recently released Google Scribe, an experiment on auto-completing words for the user in advance, instead of spell and grammar checking what he’s already written. Go try it for yourself.

I myself had some fun letting Google write stuff for me. Just start with an arbitrary word, and keep accepting whatever the first suggestion is. The UI slows down quite a lot after a dozen or so auto-completions, so sometimes you need to delete the last space and type it again to get some performance back.

For example, here’s what I got after seeding their algorithm with “I’m”:

I’m not sure if it is not a valid stream resource in C minor for Piano and Orchestra No. 1 in the world of the living room and dining room with a view to share videos with friends and family to enjoy a good meal and a drink for the first time in the future and the future of the world and the world of the living room and dining room with a view to share videos with friends and family to enjoy a good meal and a drink for the first time in the future and the future of the world and the world of the living room and dining room.

Obviously, the algorithm quickly loops like crazy, and the sentence doesn’t quite make sense, but it’s pretty funny (at least to me!). Now, here’s what I got when starting with “You”:

You can turn Autoshare off at any time and without notice to you and your family to enjoy a good meal and a drink for the first time in the future and the future of the world and the world of the living room and dining room with a view to share videos with friends and family to enjoy a good meal and a drink for the first time in the future and the future of the world and the world of the living room and dining room.

That’s pretty interesting, as the looping is very similar to the first one!

Now, here’s what I got when starting with “let’s”:

Let’s see if we can not guarantee each coupon ‘s authentication and authorization data between security domains and do not want to be related to the search result can be bookmarked to maintain the filter for future use at Recent Searches for neutral Higgs bosons in e minor for clavichord and the harpsichord and the clavichord and the harpsichord and the clavichord and the harpsichord and the clavichord and the harpsichord.

This time, the loop is short and kinda catchy. The harpsichord and the clavichord! The harpsichord and the clavichord! The harpsichord and the clavichord! Maybe that’s gonna be the next internet meme?


Facebook’s privacy issues

Everybody knows, or at least says without really knowing, that Facebook has a few privacy issues, but there’s one thing I never quite realized until recently…

It all started with my friend Bertrand opening his new personal blog. He wanted a dedicated place to post personal stuff, which he previously posted alternatively on his professional blog or on Facebook. I’m pretty sure he also wanted an excuse to play around with his new cool baby, Orchard… Anyway. In order not to start completely from scratch, he imported those previous articles he wrote, along with their comments, to the new blog.

I immediately sent him an email to tell him he could work at Google Buzz, seeing how he just disclosed Facebook comments to the public… but as I was writing it, I wondered… were his notes already public in the first place? You see, Facebook gives you a lot of control over your content’s visibility. You can specify exactly who gets access to what, who gets to comment on what, etc. The problem is that, although the content creator knows what the privacy settings are, the content consumers don’t. Being on Facebook, I just assumed that those notes were, at best, visible to friends of friends… but of course, it wasn’t the case. Bertrand’s notes were public all this time, and as I commented on them, I unknowingly posted completely public stuff.

None of those comments being public bothers me, mind you. But it’s just annoying how, when you participate in your friends’ online social life, you don’t really know what kind of privacy level you’re dealing with. Obviously, Facebook should add some kind of indicator whenever you post a comment, telling you who will be able to see it. I’m surprised this hasn’t been a bigger deal so far. Maybe people are too busy with “Company XYZ is evil!!”-type generalities to bother with actual details.


Xaml serialization quirks and gotchas

I recently had to build a little tool that would read its configuration from a XAML file (because XAML serialization is, most of the time, better and more customizable than standard XML serialization). The trick was that this tool had to be built on top of .NET 3.0 – not 3.5 or 4.0. And I discovered that there are a few little gotchas in .NET 3.0’s XAML serializer that I, somehow, never ran into before.

Gotcha #1: Automatic properties

Let’s say I wanted to deserialize the following class:

public class Foo
{
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public IList<int> Values { get; private set; }

    public Foo()
    {
        Values = new List<int>();
    }
}

Pretty straightforward… however, if I try to deserialize it from a XAML file, I get the following exception:

property ‘Foo.Values’ cannot be set because it does not have an accessible set accessor. Line ‘3’ Position ‘4’.

I thought the framework was high on drugs or something, as what I wrote was pretty much the main use-case for the DesignerSerializationVisibility attribute. But then I remembered this other little gotcha with property reflection: when you use the auto-property syntax from C# 3.0 for a read-only property, the PropertyInfo you get on that property says you can write to it. This is because there is a setter method – only it’s private. And that’s why the XAML serializer complains about accessibility.
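You can see the gotcha for yourself with a small reflection test; here’s a minimal sketch (assuming a class with the same kind of auto-property):

using System;

public class Bar
{
    public int Value { get; private set; }
}

public static class ReflectionTest
{
    public static void Main()
    {
        var property = typeof(Bar).GetProperty("Value");
        Console.WriteLine(property.CanWrite);                    // True: a setter exists...
        Console.WriteLine(property.GetSetMethod() == null);      // True: ...but it's not public...
        Console.WriteLine(property.GetSetMethod(true) == null);  // False: ...it's there, just private.
    }
}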

Note that this has been fixed with the new XAML serialization in .NET 4.0, with the new System.Xaml namespace. But if you need to target a previous version of the framework, you’ll need to refactor your code like so:

public class Foo
{
    private List<int> mValues = new List<int>();

    [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public IList<int> Values
    {
        get { return mValues; }
    }

    public Foo()
    {
    }
}

Gotcha #2: Public interface

Now you may think that the XAML serializer would be happy, but no… this is a very picky serializer! The next exception you get is:

‘Values’ is a read-only IEnumerable property, so ‘ConsoleApplication2.Foo’ must implement IAddChild. Line ‘3’ Position ‘4’.

That’s obviously weird, because the property is actually a list. But my guess is that the serializer is only looking for the ICollection interface. And the problem is that IList<T> does not implement that interface – it implements ICollection<T>, sure, but neither of those generic interfaces implements the “legacy” non-generic ones. That’s why most of the time you’re better off writing a proper collection class that inherits from Collection<T>, or some other base class from the System.Collections.ObjectModel namespace, because they implement both the generic and “legacy” interfaces… but sometimes you may not feel like writing an “IntCollection” class, right? Well. Life sucks.

Oh, and don’t bother actually implementing the IAddChild interface, or making the list itself implement that interface, it’s useless, you’ll get the same exception. That interface is some kind of weird leftover from the beta phase of WPF, and is now half-deprecated, half-internalized, as mentioned on that MSDN forums thread.

So basically, in our case, a quick fix would be changing the public interface to a List<int>.
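Put together, something along these lines should keep the .NET 3.0 serializer happy (just a sketch of the combined fix; List<int> implements the non-generic IList and ICollection interfaces the serializer seems to look for):

public class Foo
{
    private List<int> mValues = new List<int>();

    [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
    public List<int> Values
    {
        get { return mValues; }
    }
}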

After those few changes, the XAML serializer is happy and running fine!

Hope this helps.


Spam your friends with Yahoo! Pipes

You know how it goes: you’re an internet hipster with blogs and Twitter feeds and all that kind of new age stuff, but only other internet hipsters read them. Your friends (at least the ones that are not internet hipsters) only stick to Facebook. So how can you bring your stuff to them?

At first, it seems easy: Facebook can pull “stories” from various websites and services. Go to your profile, and under the status update box, click “Options” and then “Settings”. You get the following interface:

That’s cool for most people. You bring in your Flickr photos, and your blog articles, and maybe a few other things… but what happens if you’ve got more than one blog? The “Blog/RSS” site can only be chosen once. Also, notice how you can choose Google Reader as a story source. Sure, this works, but the way your shared items appear on your wall is not optimal: they appear as “secondary activities”, presented like your comments or likes, sometimes merging several ones together, which adds extra clicks to get to the actual article (e.g. “Ludovic shared 2 new items on Google Reader”, with only a link to your shared items page, which means you don’t know which 2 items were shared, as they may not be the last 2 ones if there have been other, newer ones since then). At least it was like that a few months ago (maybe they fixed it in the meantime, you tell me). That’s not proper spamming, so let’s do it the power user way.

A few years ago, Yahoo launched Yahoo! Pipes, a website that lets you build feed mashups. I decided to use that to aggregate all my stuff and post it on my Facebook wall.

It’s pretty simple, at first. You just go to the Yahoo! Pipes website, log in, click “Create a pipe”, and add one source node for each site you want to pull data from. You’ll most probably use “Fetch Feed” nodes with the direct URLs to your blogs’ and shared items’ RSS or Atom feeds, but you can also use some “Fetch Site Feed” nodes, too (they will find the actual feed URLs like any feed reader would do). Now pick the “Union” node from the “Operators” category, and plug all your sources into it. Plug the “Union” output into the “Pipe Output” node that should have been created by default for you. Voilà, you have an aggregated feed that you can import on your wall on Facebook!

Or do you?

One problem you’ll notice right away is that all the items from the first feed are displayed, and then all the items from the second feed, and so on… The “Union” node only built a, well, union of those feeds. You need to re-order them by date so that all the items from all the sources are correctly mixed together. For this, add a “Sort” node, as shown above, that will sort items by “item.pubDate”.

There. Fixed? Nope… not yet.

Now you have a user experience problem. All those items appear as “notes” on your Facebook wall, with their contents directly readable. If you’re pulling feeds from your own websites and feeds from elsewhere at the same time (e.g. your Google Reader’s shared items feed), it becomes difficult for your friends to figure out which stuff was written by you, and which stuff is just cool shit you found on the web and wanted to share. You need to create a differentiator, like for example prepend “Shared on Google Reader:” in front of each of your shared items’ titles.

I’m still evaluating different options, but at the moment I’m using something a bit less user-friendly, although definitely more friendly to the websites I share stuff from: I completely replace the contents of the item with a link to the actual article on its original website. This means that people can’t read the item right there (they need to click on the link), but it also means the people who posted the cool shit in the first place get a visitor who will potentially start clicking around on links and ads if he liked whatever I shared.

For this I created a different pipe, although I could have hard-coded it in the first one.

This pipe basically gets my Google Reader shared items feed and processes it in a loop: for each item, I replace the contents with a link whose text is just the title of the item. Inject this new pipe into the first one (you can reference a pipe inside another pipe) and, at last, you’re done!

The only problem I’ve had so far is that, after adding my aggregated feed to Facebook for the first time, the mobile version of the website did something funky. Instead of weaving all those notes into my friends’ timelines, moving the older items behind the more recent updates from other people and applications, it put everything it could at the top. So basically, I really spammed all my friends with dozens and dozens of items that were several weeks or even several months old. This bug didn’t affect the normal website, neither did it affect the iPhone application, as far as I could tell, so I only got a couple complaints. And hopefully they fixed the bug since then.

That’s pretty much it. You can find the 2 pipes I showed here and here.


About unit testing Maya and MStatus macros

Programmers in the video game and movie industries rarely write unit tests, for all kinds of reasons and excuses, but every now and then, it happens. And it can get a bit complicated when you want to test a plug-in hosted by a 3rd party application like Autodesk’s Maya.

Setting up the unit test project

The good thing is, unlike most other 3d modeling packages, Maya comes with built-in “batch” and “library” modes. The batch mode effectively runs Maya as a command line program, and the library mode allows you to host most of the Maya core engine inside your own application. This means that, as long as you’re not using anything that depends on the existence of the UI, it’s possible to run automated tests.

Once you’ve created your unit test project using your favourite test framework (lately I’ve been using boost::test), you want to initialize Maya in a minimalist environment (most probably in a test fixture setup phase). You can do that by pointing the MAYA_APP_DIR environment variable to a custom folder of your choice, which would contain a custom Maya.env file, along with maybe a few custom startup scripts. The goal is obviously to not load your complete production environment with all your 3rd party plug-ins. You also probably want to set the MAYA_PLUG_IN_PATH and MAYA_SCRIPT_PATH variables to point to the output or deployment folder(s) of your test project. This can even be done programmatically with the Maya API using MCommonSystemUtils::putEnv, as long as it happens before you initialize the engine.

When you’ve got the correct environment, you can call MLibrary::initialize and wait for an excruciatingly long time (somewhere between 20 and 30 seconds). Ideally, I’d like to start with a fresh Maya engine for each test suite (or even each test!), but given how long it takes for the library to initialize, that’s not an option and I do it only once on start-up. Between tests, I merely clear the scene with a “file -force -new” MEL command. I know you can severely strip down Maya by telling it to not even load the default plug-ins (e.g. if you’re testing something that doesn’t need rendering and animation and skinning and all that, you can in theory not load those features), but I haven’t bothered looking into that yet. If you have, I’d love to hear about it.

Anyway, all this, along with some nice helper functions for checking things in the scene, is pretty much enough to get you going… until you write your first “fail” test, i.e. a test that ensures that a given situation returns the expected error code or throws the expected exception.

The MStatus problem

You see, the problem is that, if you’re a bit paranoid like me, you’re probably using a lot of CHECK_MSTATUS macros around your code to make sure you’ll notice early if anything is not going as expected (the Maya API, being old as it is, doesn’t use exceptions, so if you don’t check return codes regularly you could go a long way with invalid objects, which increases the chances it’ll do some harm to your users’ data). When these macros are passed a non successful MStatus, they print a message to the standard error output. This pretty much means you’ll see that message printed in the terminal window as your unit test runs. That’s fine, but in the case of your fail test you don’t want that to happen because you only want to see unexpected error messages printed in the console.

Looking at the code for those CHECK_MSTATUS macros, one can see that they use the STL’s std::cerr to print the beginning of the message, and then call MStatus::perror to print a user-friendly description of the error. Being naive, I thought “good, I just need to temporarily capture std::cerr!”. So I wrote the following little structure:

#include <iostream>
#include <sstream>

struct StreamCapture
{
public:
    StreamCapture(std::ostream& str, const std::ostream& replacement);
    ~StreamCapture();

private:
    std::ostream& mStr;
    const std::ostream& mReplacement;
    std::streambuf* mPrev;
};

StreamCapture::StreamCapture(std::ostream& str, const std::ostream& replacement) :
    mStr(str),
    mReplacement(replacement)
{
    // Swap the stream's buffer for the replacement's, remembering the
    // original one so it can be restored on destruction.
    mStr.flush();
    mPrev = mStr.rdbuf();
    mStr.rdbuf(mReplacement.rdbuf());
}

StreamCapture::~StreamCapture()
{
    // Restore the original buffer.
    mStr.flush();
    mStr.rdbuf(mPrev);
}

#define CAPTURE_COUT() \
    std::stringstream replace_cout_##__LINE__ ; \
    StreamCapture capture_cout_##__LINE__ (std::cout, replace_cout_##__LINE__)

#define CAPTURE_CERR() \
    std::stringstream replace_cerr_##__LINE__ ; \
    StreamCapture capture_cerr_##__LINE__ (std::cerr, replace_cerr_##__LINE__)

#define CAPTURE_ALL() \
    CAPTURE_COUT(); \
    CAPTURE_CERR()

It’s using the classic RAII pattern, and you can use it as such:

BOOST_AUTO_TEST_CASE(TestThatSomethingFailingDoesntScrewUpTheRest)
{
    // Do something
    initializeStuff();
    {
        CAPTURE_ALL();
        callSomethingThatWillFail();
    }
    // Check that even with the failure, the data in the scene is still ok.
    checkStuff();
}

It won’t take you long to realize that only the first half of any error message printed by CHECK_MSTATUS has been captured. “No worries”, I thought. “They probably print the other half with a good ol’ fprintf, so I’ll capture stderr too!”.

Well… I tried a few different things, from a few different sources, but none of them worked. At best, it didn’t do anything – the error code description would still get printed. At worst, it would seemingly work but would then render the whole program unstable (most of the time it would crash on the second attempt to capture the error output). I don’t know what Maya is using to print that MStatus out, but it sure isn’t a straightforward fprintf.

Now, before moving on to my (somewhat disappointing) solution, let’s look at another MStatus problem.

The CHECK_MSTATUS_AND_RETURN_IT problem

One of the CHECK_MSTATUS macros is CHECK_MSTATUS_AND_RETURN_IT, which checks for any error and, if found, will make the current function return that very same error. A typical use of it would be:

MStatus doSomethingSimple();

MStatus doSomethingMoreComplicated()
{
    // Do stuff
    CHECK_MSTATUS_AND_RETURN_IT(doSomethingSimple());

    // Do more stuff
    return MStatus::kSuccess;
}

The problem is that this macro is implemented as such:

#define CHECK_MSTATUS_AND_RETURN_IT(_status) \
    CHECK_MSTATUS_AND_RETURN((_status), (_status))

Do you see? It calls CHECK_MSTATUS_AND_RETURN which, if the first status is an error, returns the second status. This means that if the call to doSomethingSimple fails, the macro will call it a second time to get the return value!

This is obviously bad… (and another example of why C++ macros can be evil because it’s not obvious what you can and cannot pass into them).

At first I defined a CHECK_MSTATUS_AND_RETURN_IT_SAFE macro that did the correct thing, but for various reasons I decided to just redefine the original macro and prevent other programmers from making that mistake (at least as long as my header file was included, which is easier to enforce, especially if you’re using things like pre-compiled headers):

#pragma warning(push)
#pragma warning(disable:4005)
#define CHECK_MSTATUS_AND_RETURN_IT(status) \
    CHECK_MSTATUS_AND_RETURN_IT_SAFE(status);
#pragma warning(pop)

Back to the MStatus problem

Now what does this have to do with the error message problem?

Well, now that I was already redefining one of the CHECK_MSTATUS macros, I figured it wasn’t much more trouble to redefine them all (there’s only a handful of them). The only thing you need to do is replace the call to MStatus::perror with a line that sends the MStatus into std::cerr. I mean, MStatus already has operator<< defined for IO streams, so it’s not like I had to do anything more than some copy/paste and changing one line.

So there you have it: my crappy solution for having clean Maya unit tests was to redefine the CHECK_MSTATUS macros so they use the STL’s IO streams. Do you have a better or more elegant solution? I’d love to know about it!


Writing a custom Main() method for WPF applications

Creating a new WPF project in Visual Studio gives you the following pretty simple application markup and code:

<Application x:Class="WpfApplication2.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Window1.xaml">
</Application>
namespace WpfApplication2
{
    public partial class App : Application
    {
    }
}

Understanding how it really works, and how to supply your own custom Main() method, is just a search query away. You basically need to change App.xaml’s build action from “Application Definition” to “Page”, create a constructor that calls “InitializeComponent”, and write a Main() that eventually calls one of the application’s “Run” method overloads.

namespace WpfApplication1
{
    public partial class App : Application
    {
        App()
        {
            InitializeComponent();
        }

        [STAThread]
        static void Main()
        {
            Window1 window = new Window1();
            App app = new App();
            app.Run(window);
        }
    }
}

Don’t forget, also, to remove the “StartupUri” from the App.xaml, otherwise another copy of the window will show up (unless you get an error because the URI points to a non-existing XAML resource).

What most articles won’t tell you, though (some of them actually getting it wrong), is that it’s important that you create the application before you create your first window. To illustrate this, let’s add an application resource to App.xaml:

<Application 
    x:Class="WpfApplication1.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Application.Resources>
        <Style TargetType="Button">
            <Setter Property="Background" Value="Red" />
        </Style>
    </Application.Resources>
</Application>

It’s a style that makes buttons red. Since it’s defined at application level, all buttons in all the windows should be red (except those that define their own local style override). For example, here’s the markup for my Window1:

<Window x:Class="WpfApplication1.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Button>Is This Red?</Button>
    </Grid>
</Window>

We should see a window with a red button in it. But when I run the project, I get this:

Well… it’s not red.

The issue is that the window is created before the application. This means that when it queries the current application to get the globals resources, it finds nothing. What you need to do is simply create the application first:

[STAThread]
static void Main()
{
    App app = new App();
    Window1 window = new Window1();
    app.Run(window);
}

By simply switching those two lines, you get the expected result:

This drove me crazy for an hour or so – I thought there was something funky going on with my theme management code or something. Hopefully this will save someone some pain.


Consolidate instant messaging accounts into your Gmail

Everybody knows that Gmail is great for consolidating multiple email accounts into one place that’s easy to search, organize, backup, and get out of. What fewer people know is that it’s also a great place to consolidate your instant messaging accounts!

Watch out, this article is pretty long and gets quite nerdy at the end.

Some background information (and a few rants)

We’re going to talk about merging accounts from different instant messaging services (Gtalk, MSN, ICQ, etc.), so let’s get this out of the way first: yes, I could use programs like Pidgin or Trillian to log into all those networks simultaneously, but I’d still have to search in at least two places when it comes to past communications, and that’s without considering problems like chat history being only stored locally on one computer, which means I then have to sync that with my other computers using Dropbox or Mesh. Also, it’s really way too simple for my tastes. There’s a better, more complicated and geek-fulfilling way.

Google made the very good decision of using Jabber, a.k.a. XMPP, an open protocol, to implement their own instant messaging system Gtalk. As usual with Google, though, they didn’t quite follow the standard entirely but it’s compatible enough for what I need… mostly. The other good thing with Google is that they integrated the whole thing into Gmail so that chats are searchable along with emails, which is what I’m after, here. Some people may be uncomfortable with the privacy implications, but those people probably don’t use Google services anyway (why would you trust them with your emails or videos or pictures but not chats?). In fact, people worried about privacy probably don’t use many web services in general, unless they’re one of those weirdoes who actually read the whole terms of services and really compare them (I don’t even know if such weirdoes exist). Besides, when you start worrying about privacy, you generally end up setting up your own email server, which then makes you worry about other things like backup, whitelisting/greylisting, encryption, etc… Anyway.

So what then? Well, the XMPP protocol has things called “transports” which basically translate to and from other IM networks like MSN, Yahoo and others. That’s the way we’ll consolidate all our IM networks into Gmail!

There are a few tutorials out there that explain how to set that up, so I’ll quickly walk through the first steps and then get to what I did differently, which is the good part.

Setting up Psi IM

Go and get Psi, a cross-platform instant-messenger client that has one of the most complete feature sets out there when it comes to Jabber magic.

Create an account in Psi that will connect to Gtalk. You can follow one of the previously linked tutorials for that, or Google’s own instructions. Apart from the login/password, it mostly comes down to the following settings:

If you’re using Google Apps, you’ll also have to add a few records to your DNS zone file. The instructions are provided by Google (you obviously need to replace “example.org” with your own domain):

_xmpp-server._tcp.example.org. 28800 IN SRV 5 0 5269 xmpp-server.l.google.com.
_xmpp-server._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server1.l.google.com.
_xmpp-server._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server2.l.google.com.
_xmpp-server._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server3.l.google.com.
_xmpp-server._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server4.l.google.com.
_jabber._tcp.example.org. 28800 IN SRV 5 0 5269 xmpp-server.l.google.com.
_jabber._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server1.l.google.com.
_jabber._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server2.l.google.com.
_jabber._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server3.l.google.com.
_jabber._tcp.example.org. 28800 IN SRV 20 0 5269 xmpp-server4.l.google.com.

At this point, you should be able to chat through Psi without anything new compared to Gtalk besides the warm fuzzy feeling of using a non-Google open-source client to do so.

This is where the previously mentioned tutorials make you connect to 3rd party servers running those “transports” in order to get access to the other IM networks. And this is also where I find the limits of my trusting nature. First, I don’t like giving my credentials for one service to another service. Second, I kinda know who’s behind the services I use on a daily basis (mostly either one of the big three, Microsoft, Google, Yahoo). On the other hand, I have no idea who’s running those Jabber servers (judging from their main websites I’d say it’s a mix of geeky individuals, bored IT technicians, universities, and shady businesses). I don’t really want to give any of those guys my Windows Live or Yahoo passwords… which is why I decided to run my own private Jabber server! (see? I told you it would be geeky and overly complicated!)

Running your own Jabber server

The idea is to run your own Jabber transports so that there’s no 3rd party involved in your communications – only you, your friends, and the IM networks used in between.

Before we go into the tutorial, I’d recommend that you first set up the DNS records for the domains and sub-domains you’ll use for your server, because that will take a bit of time to propagate through the internet and you don’t want to test anything by using some temporary URL or the server’s IP (I’ll tell you why later). In my case, all I needed was an address for the server itself, and another one for the only transport I need so far, an MSN gateway. So I created type A records for “xmpp.mydomain.org” and “msn.xmpp.mydomain.org”.
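For reference, the corresponding zone file entries would look something like this (the names and the IP are placeholders; use your own domain and your server’s actual public address):

xmpp.mydomain.org.     28800 IN A 203.0.113.10
msn.xmpp.mydomain.org. 28800 IN A 203.0.113.10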

At first I tried setting up jabberd2 but I found it to be a bit too cumbersome (why does it need 4 services, 4 configuration files, and 4 logs?) and difficult to troubleshoot. I ended up using ejabberd, which had more informative logs and documentation. Note that at this point, I don’t care about performance or scalability since this server will only be for me and maybe a couple of family members.

Setting up ejabberd was pretty easy since you only need to follow their guide, which tells you to add your server domain to the hosts list in ejabberd.cfg:

{hosts, ["xmpp.mydomain.org"]}.

If your server is behind a router, you’ll need to open ports 5222 and 5223 for client connections (standard and legacy SSL), 5269 for server connections, 5280 for HTTP requests, and 8010 for file transfers.

At this point, you should be able to go to the “Service Discovery” window in Psi, type your server’s address, and see a bunch of exposed services (although they will likely be disabled). The example below uses flexion.org which is not my server (I’ll keep it private, thank you very much), and shows a slightly different list of services than a default ejabberd installation… but the important thing is that you know your server is online, with the proper domain.

If it doesn’t work the first time, check ejabberd’s log file (making sure the configuration’s logging level is appropriate for errors and warnings). Your Jabber server may have trouble finding DNS information for your account’s servers (talk.google.com, mostly). In this case, the solution can be found on ejabberd’s FAQ. I used the 3rd listed solution, which is to add the IP addresses of nameservers like OpenDNS to the inetrc configuration file:

{nameserver, {208,67,222,222}}.
{nameserver, {208,67,220,220}}.
{registry, win32}.

Running your own MSN transport

Now at last we can download and install PyMSNt, which seems to be the most widely used MSN transport (oh, and it’s pretty much the only one, too). Once again, follow the installation guide, which will ask you to install Python and Twisted. PyMSNt will actually run in its own Python process, talking to the main Jabber server service via TCP.

PyMSNt’s configuration file, aptly named config.xml, only needs a couple of modifications for the transport to work: set the JID to your transport’s sub-domain (e.g. msn.xmpp.mydomain.org) and set the host to the same value. However, the tricky thing is that if your server is behind a router, the host value should instead be the IP of that machine on your local network (something like “192.168.x.x”).

Then, you need to edit ejabberd’s configuration file to add an entry in the “{listen” section:

{5347, ejabberd_service, [{host, "msn.xmpp.mydomain.org",
                                          [{password, "secret"}]}]},

Restart ejabberd, fire up PyMSNt, and you should see some entries popping up in ejabberd’s log about an external service getting connected, and a new route registered for server “msn.xmpp.mydomain.org”.

Go back to Psi, look at your server’s services again, and you should see an MSN transport available there (the example below uses the jabber.dk server, which actually exposes 4 MSN transports!):

If all is good, you should be able to right click your MSN transport and choose “Register”. You’ll be asked to enter your Windows Live ID, which is all good because that will end up on your own server (and in plain text! Good thing it’s ours, eh?). Then, you’ll be asked to authorize the transport to register itself in your roster.

You should now see, in Psi, a new entry for your MSN transport in your contact list, under the “Agents/Transports” group. You should also pretty quickly start receiving authorization requests from all your MSN contacts. Since there can be a lot of them, you could, just before authorizing the transport, go into Psi’s options to enable “Auto-authorize contacts” (don’t forget to disable it later). Also, don’t worry, it’s only some handshaking stuff on the Jabber side of things – your friends won’t know what you’re up to, except that they’ll probably see you appear and disappear every 5 minutes for a couple hours while you’re testing 🙂

Getting all your contacts in line

Once your contacts are imported, you can casually chat with them and check that they don’t suspect anything. On your side, though, they all have ugly names… things like:

john_doe%hotmail.com@msn.xmpp.mydomain.org

It’s pretty easy to figure out… the first part is their IM handle (here, a Windows Live ID), with “@” replaced by “%”, if any. The second part is “@server.com” to turn this into a proper Jabber ID.

What I recommend doing, at this point, is to rename all your contacts in Psi to give them the same names they have in your Gmail address book. Then, close Psi, go to your Gmail contacts, and use the “find duplicates” feature. It will merge the new contacts (who have only that one weird Jabber ID as their email address) with your existing entries. It should also make those contacts appear as expected in your chat widget, or in Gtalk.

Note that all your contacts’ Jabber IDs are tied to your own Jabber server. This means that if you want to replace a transport by using another one from a different server, you’d get a whole new batch of contacts with IDs ending in the new server’s name. It’s a bit annoying, as it forces you to do some address book management to clean up the old unused IDs, and that’s why I said earlier that it wasn’t a good idea to start testing your server using a temporary URL or an IP address.

Some assembly required

If you’re in the same situation as me, not all of your contacts will show up in Gtalk. Actually, at first, only 2 of my MSN contacts showed up. I had no idea why, but I suspect some funky stuff going on with the very peculiar way Gmail decides who to show in your chat list based on how often you communicate with them (you can somewhat turn that off in the web widget by telling it to show all contacts).

In the end, I removed and re-registered with my MSN transport a few times through Psi, and each time I’d see more online contacts in Gtalk. Go figure…

There are a few other glitches. For example, every time I login with Psi, I get a message through the transport about how I should update to the latest Live Messenger to get new features. I don’t get that in Gtalk, but it’s probably because it doesn’t support messages from transports. Other glitches included problems connecting to the MSN servers or missing avatar pictures, but this is all fixed if you take the latest code from PyMSNt’s repository.

One big drawback, however, is that there doesn’t seem to be a way so far to backup the chat history you have on the cloud. Hopefully, Google will someday extend their data freedom philosophy to Gtalk, but in the meantime, using Psi (or some other similar client) exclusively is the only way to have a copy of your logs. But then again, if that was a big concern, you probably didn’t use Gtalk in the first place.

So far I’ve been using this setup for a week, and it works well. I’ll post an update after a month or so.


Visual Studio Express’ limitations lead to bad practices

Visual Studio Express only has a subset of what you can find in Visual Studio Professional, which makes complete sense, but two missing features actually prevent users from following best programming practices in my opinion. The whole point of Express is to let enthusiasts and students discover the joys of programming – so we might as well let them do it the proper way.

The first limitation is not being able to set an external executable as the “start action” when you debug. In Express, you can only run the debugger on an application project (console, UI, or web application). You can’t set a library project to be the startup project, nor can you attach the debugger to another process. This means that if you want to debug your unit tests, the unit test project must be an executable.

Thankfully, most unit testing frameworks have a console bootstrapper that you can use as your console application’s main loop, but it’s not ideal, and it probably doesn’t entice many people to write unit tests because they have to figure all this out first. More importantly, it breaks down when you can’t create console applications at all, like when you develop with Silverlight (although .NET 4’s assembly compatibility with Silverlight 4 may make things smoother here).
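For example, with a framework like NUnitLite the whole test project can be a plain console application along these lines (just a sketch; it assumes the NUnitLite package is referenced, and the exact entry point varies between frameworks and versions):

using NUnitLite;

public static class Program
{
    public static int Main(string[] args)
    {
        // Runs all the tests in this assembly and returns a non-zero exit code
        // if any of them fail, which also makes it usable from a build script.
        return new AutoRun().Execute(args);
    }
}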

A way to get around that would be to use add-ins like TestDriven.NET, but Express has another limitation: it doesn’t support add-ins (this actually got TestDriven.NET into some trouble at some point). Other ways to get around it would be to access Visual Studio’s command window, or drive it with VB macros, but Microsoft closed those “exploits” with Express 2008.

The only way to get around those limitations is to use the .NET Framework’s CLR debugger, which ships with the SDK and is a stripped down version of the Visual Studio shell featuring only the debugging features. The problem is obviously that it’s a bare bones debugger that’s completely disconnected from the IDE experience.

The CLR debugger is also the only way to get around the second of Express’ limitations… You’ve probably already encountered a program that does heavy processing on the UI thread instead of in the background, resulting in sluggish user interaction, no progress reporting, and no way to cancel the operation. Well, it’s pretty difficult to implement it otherwise with Visual Studio Express, because it doesn’t support multi-thread debugging.

Sure, you can put breakpoints in code that will run on different threads, and it will break correctly, but you won’t be able to access the “Threads” debug window because there isn’t one. This means you won’t be able to look at what’s running at the same time, so it’s pretty useless… and it’s a shame because the framework makes it so easy to use thread pools, background workers and such.
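Just to illustrate the kind of code that’s trivial to write but painful to debug under Express, here’s a minimal BackgroundWorker sketch (written as a console program for brevity; in a real UI application the progress events would drive a progress bar):

using System;
using System.ComponentModel;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (sender, e) =>
        {
            // The heavy processing happens on a background thread...
            for (int i = 0; i <= 100; i += 10)
            {
                Thread.Sleep(100); // stand-in for real work
                ((BackgroundWorker)sender).ReportProgress(i);
            }
        };

        // ...while progress events are raised for the caller (in a WinForms or
        // WPF application they would be marshalled back to the UI thread).
        worker.ProgressChanged += (sender, e) => Console.WriteLine("Progress: {0}%", e.ProgressPercentage);
        worker.RunWorkerCompleted += (sender, e) => Console.WriteLine("Done.");

        worker.RunWorkerAsync();
        Console.ReadLine(); // keep the process alive while the worker runs
    }
}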

It seems critical to me that Microsoft add those missing features to Visual Studio Express as they expand the open-source .NET community with their own projects. It should be as easy as possible for anybody to participate – not just people with a $1200 MSDN subscription. But the reality is that most of those open-source projects aren’t usable on anything other than the Professional versions (which opens up another debate regarding the culture of the .NET community, but that’s for another time). Of course, people could still use other IDEs like SharpDevelop, but then what’s the point of having those Express editions? I’m sure Microsoft would be happy to keep all those students on Windows and Visual Studio – as they should. So if Microsoft is committed to a free, functional and educational version of Visual Studio, I think they’ll have to fix the feature set appropriately.