The Stochastic Game

Ramblings of General Geekery

Writing a custom Main() method for WPF applications

Creating a new WPF project in Visual Studio gives you the following pretty simple application markup and code:

<Application x:Class="WpfApplication2.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Window1.xaml">
    <Application.Resources>
    </Application.Resources>
</Application>

namespace WpfApplication2
{
    public partial class App : Application
    {
    }
}

Understanding how it really works, and how to supply your own custom Main() method, is just a search query away. You basically need to change the application definition's build action from “Application Definition” to “Page”, create a constructor that calls InitializeComponent(), and write a Main() that eventually calls one of the application’s Run() method overloads.

namespace WpfApplication1
{
    public partial class App : Application
    {
        public App()
        {
            InitializeComponent();
        }

        [STAThread]
        static void Main()
        {
            Window1 window = new Window1();
            App app = new App();
            app.Run(window);
        }
    }
}

Don’t forget, also, to remove the “StartupUri” from App.xaml, otherwise another copy of the window will show up (unless you get an error because the URI points to a non-existent XAML resource).

What most articles won’t tell you, though (some of them actually getting it wrong), is that it’s important that you create the application before you create your first window. To illustrate this, let’s add an application resource to App.xaml:

<Application.Resources>
    <Style TargetType="Button">
        <Setter Property="Background" Value="Red" />
    </Style>
</Application.Resources>

It’s a style that makes buttons red. Since it’s defined at application level, all buttons in all the windows should be red (except those that define their own local style override). For example, here’s the markup for my Window1:

<Window x:Class="WpfApplication1.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="300" Width="300">
    <Grid>
        <Button>Is This Red?</Button>
    </Grid>
</Window>

We should see a window with a red button in it. But when I run the project, I get this:

Well… it’s not red.

The issue is that the window is created before the application. This means that when the window queries the current application to get the global resources, it finds nothing. What you need to do is simply create the application first:

static void Main()
{
    App app = new App();
    Window1 window = new Window1();
    app.Run(window);
}

By simply switching those two lines, you get the expected result:

This drove me crazy for an hour or so – I thought there was something funky going on with my theme management code or something. Hopefully this will save someone some pain.

Consolidate instant messaging accounts into your Gmail

Everybody knows that Gmail is great for consolidating multiple email accounts into one place that’s easy to search, organize, back up, and get out of. What fewer people know is that it’s also a great place to consolidate your instant messaging accounts!

Watch out, this article is pretty long and gets quite nerdy at the end.

Some background information (and a few rants)

We’re going to talk about merging accounts from different instant messaging services (Gtalk, MSN, ICQ, etc.) so let’s get this out of the way first: yes, I could use programs like Pidgin or Trillian to log into all those networks simultaneously, but I’d still have to search in at least two places when it comes to past communications, and that’s without considering problems like chat history being stored locally on only one computer, which means I’d then have to sync it with my other computers using Dropbox or Mesh. Also, it’s really way too simple for my tastes. There’s a better, more complicated and geek-fulfilling way.

Google made the very good decision of using Jabber, a.k.a. XMPP, an open protocol, to implement their own instant messaging system, Gtalk. As usual with Google, though, they didn’t quite follow the standard entirely, but it’s compatible enough for what I need… mostly. The other good thing about Google is that they integrated the whole thing into Gmail so that chats are searchable along with emails, which is what I’m after here. Some people may be uncomfortable with the privacy implications, but those people probably don’t use Google services anyway (why would you trust them with your emails or videos or pictures but not chats?). In fact, people worried about privacy probably don’t use many web services in general, unless they’re one of those weirdoes who actually read the whole terms of service and really compare them (I don’t even know if such weirdoes exist). Besides, when you start worrying about privacy, you generally end up setting up your own email server, which then makes you worry about other things like backup, whitelisting/greylisting, encryption, etc… Anyway.

So what then? Well, the XMPP protocol has things called “transports”, which basically translate to and from other IM networks like MSN, Yahoo and others. That’s how we’ll consolidate all our IM networks into Gmail!

There are a few tutorials out there that explain how to set that up, so I’ll quickly walk through the first steps and then get to what I did differently, which is the good part.

Setting up Psi IM

Go and get Psi, a cross-platform instant-messenger client that has one of the most complete feature sets out there when it comes to Jabber magic.

Create an account in Psi that will connect to Gtalk. You can follow one of the previously linked tutorials for that, or Google’s own instructions. Apart from the login/password, it mostly comes down to the following settings:

If you’re using Google Apps, you’ll also have to add a few records to your DNS zone file. The record set looks like the following (reconstructed from Google’s published instructions; you obviously need to replace “example.com” with your own domain):

_xmpp-server._tcp.example.com. 28800 IN SRV 5 0 5269 xmpp-server.l.google.com.
_xmpp-server._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server1.l.google.com.
_xmpp-server._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server2.l.google.com.
_xmpp-server._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server3.l.google.com.
_xmpp-server._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server4.l.google.com.
_jabber._tcp.example.com. 28800 IN SRV 5 0 5269 xmpp-server.l.google.com.
_jabber._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server1.l.google.com.
_jabber._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server2.l.google.com.
_jabber._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server3.l.google.com.
_jabber._tcp.example.com. 28800 IN SRV 20 0 5269 xmpp-server4.l.google.com.

At this point, you should be able to chat through Psi without anything new compared to Gtalk besides the warm fuzzy feeling of using a non-Google open-source client to do so.

This is where the previously mentioned tutorials make you connect to 3rd party servers running those “transports” in order to get access to the other IM networks. And this is also where I find the limits of my trusting nature. First, I don’t like giving my credentials for one service to another service. Second, I kinda know who’s behind the services I use on a daily basis (mostly either one of the big three, Microsoft, Google, Yahoo). On the other hand, I have no idea who’s running those Jabber servers (judging from their main websites I’d say it’s a mix of geeky individuals, bored IT technicians, universities, and shady businesses). I don’t really want to give any of those guys my Windows Live or Yahoo passwords… which is why I decided to run my own private Jabber server! (see? I told you it would be geeky and overly complicated!)

Running your own Jabber server

The idea is to run your own Jabber transports so that there’s no 3rd party involved in your communications – only you, your friends, and the IM networks used in between.

Before we go into the tutorial, I’d recommend that you first set up the DNS records for the domains and sub-domains you’ll use for your server, because they will take a bit of time to propagate through the internet, and you don’t want to test anything using some temporary URL or the server’s IP (I’ll tell you why later). In my case, all I needed was an address for the server itself, and another one for the only transport I need so far, an MSN gateway. So I created a type A record for each sub-domain (think “jabber.example.com” and “msn.example.com”).

At first I tried setting up jabberd2 but I found it to be a bit too cumbersome (why does it need 4 services, 4 configuration files, and 4 logs?) and difficult to troubleshoot. I ended up using ejabberd, which had more informative logs and documentation. Note that at this point, I don’t care about performance or scalability since this server will only be for me and maybe a couple of family members.

Setting up ejabberd was pretty easy since you only need to follow their guide, which tells you to add your server domain (shown here as the placeholder “example.com”) to the hosts list in ejabberd.cfg:

{hosts, ["example.com"]}.

If your server is behind a router, you’ll need to open ports 5222 and 5223 for client connections (standard and legacy SSL), 5269 for server connections, 5280 for HTTP requests, and 8010 for file transfers.

At this point, you should be able to go to the “Service Discovery” window in Psi, type your server’s address, and see a bunch of exposed services (although they will likely be disabled). The example below uses a public server, not mine (I’ll keep mine private, thank you very much), and shows a slightly different list of services than a default ejabberd installation… but the important thing is that you know your server is online, with the proper domain.

If it doesn’t work the first time, check ejabberd’s log file (making sure the configuration’s logging level is appropriate for errors and warnings). Your Jabber server may have trouble finding DNS information for your account’s servers (Google’s talk servers, mostly). In this case, the solution can be found in ejabberd’s FAQ. I used the 3rd listed solution, which is to add the IP addresses of nameservers like OpenDNS to the inetrc configuration file:

{nameserver, {208,67,222,222}}.
{nameserver, {208,67,220,220}}.
{registry, win32}.

Running your own MSN transport

Now at last we can download and install PyMSNt, which seems to be the most widely used MSN transport (oh, and it’s pretty much the only one, too). Once again, follow the installation guide, which will ask you to install Python and Twisted. PyMSNt will actually run in its own Python process, talking to the main Jabber server service via TCP.

PyMSNt’s configuration file, aptly named config.xml, only needs a couple of modifications for the transport to work: set the JID to your transport’s sub-domain (e.g. “msn.example.com”) and set the host to the same value. However, the tricky thing is that if your server is behind a router, the host value should instead be the IP of that machine on your local network (something like “192.168.x.x”).

Then, you need to edit ejabberd’s configuration file to add an entry in the “{listen” section:

{5347, ejabberd_service, [{host, "msn.example.com",
                                          [{password, "secret"}]}]},

Restart ejabberd, fire up PyMSNt, and you should see some entries pop up in ejabberd’s log about an external service getting connected, and a new route registered for your transport’s sub-domain.

Go back to Psi, look at your server’s services again, and you should see an MSN transport available there (the example below uses a public server which actually exposes 4 MSN transports!):

If all is good, you should be able to right-click your MSN transport and choose “Register”. You’ll be asked to enter your Windows Live ID, which is all good because it will end up on your own server (and in plain text! Good thing it’s ours, eh?). Then, you’ll be asked to authorize the transport to register itself in your roster.

You should now see, in Psi, a new entry for your MSN transport in your contact list, under the “Agents/Transports” group. You should also pretty quickly start receiving authorization requests from all your MSN contacts. Since there can be a lot of them, you could, just before authorizing the transport, go into Psi’s options to enable “Auto-authorize contacts” (don’t forget to disable it later). Also, don’t worry, it’s only some handshaking stuff on the Jabber side of things – your friends won’t know what you’re up to, except that they’ll probably see you appear and disappear every 5 minutes for a couple hours while you’re testing 🙂

Getting all your contacts in line

Once your contacts are imported, you can casually chat with them and check that they don’t suspect anything. On your side, though, they all have ugly names… things like “friend%hotmail.com@msn.example.com”.

It’s pretty easy to figure out… the first part is their IM handle (here, a Windows Live ID), with “@” replaced by “%”. The second part is your transport’s sub-domain, which turns the whole thing into a proper Jabber ID.
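That mangling convention is easy to sketch. Here’s a quick illustration in Python (the transport sub-domain “msn.example.com” is a placeholder, and the function names are mine):

```python
# Illustration of how an MSN transport typically mangles foreign IM handles
# into Jabber IDs; "msn.example.com" stands in for your transport sub-domain.
def to_jabber_id(handle, transport_domain="msn.example.com"):
    # "friend@hotmail.com" -> "friend%hotmail.com@msn.example.com"
    return handle.replace("@", "%") + "@" + transport_domain

def from_jabber_id(jid):
    # Split on the *last* "@" (the transport domain), then undo the "%" trick.
    local, _, _ = jid.rpartition("@")
    return local.replace("%", "@")

print(to_jabber_id("friend@hotmail.com"))
# friend%hotmail.com@msn.example.com
print(from_jabber_id("friend%hotmail.com@msn.example.com"))
# friend@hotmail.com
```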

What I recommend doing, at this point, is to rename all your contacts in Psi to give them the same names they have in your Gmail address book. Then, close Psi, go to your Gmail contacts, and use the “find duplicates” feature. It will merge the new contacts (who have only that one weird Jabber ID as their email address) with your existing entries. It should also make those contacts appear as expected in your chat widget, or in Gtalk.

Note that all your contacts’ Jabber IDs are tied to your own Jabber server. This means that if you replace a transport with one from a different server, you’ll get a whole new batch of contacts with IDs ending in the new server’s name. It’s a bit annoying, as it forces you to do some address book management to clean up the old unused IDs, and that’s why I said earlier that it isn’t a good idea to start testing your server using a temporary URL or an IP address.

Some assembly required

If you’re in the same situation as me, not all of your contacts will show up in Gtalk. Actually, at first, only 2 of my MSN contacts showed up. I had no idea why, but I suspect some funky stuff going on with the very peculiar way Gmail decides who to show in your chat list based on how often you communicate with them (you can somewhat turn that off in the web widget by telling it to show all contacts).

In the end, I removed and re-registered with my MSN transport a few times through Psi, and each time I’d see more online contacts in Gtalk. Go figure…

There are a few other glitches. For example, every time I log in with Psi, I get a message through the transport about how I should update to the latest Live Messenger to get new features. I don’t get that in Gtalk, probably because it doesn’t support messages from transports. Other glitches included problems connecting to the MSN servers and missing avatar pictures, but these are all fixed if you take the latest code from PyMSNt’s repository.

One big drawback, however, is that there doesn’t seem to be a way, so far, to back up the chat history you have in the cloud. Hopefully, Google will someday extend their data freedom philosophy to Gtalk, but in the meantime, using Psi (or some similar client) exclusively is the only way to have a copy of your logs. But then again, if that were a big concern, you probably wouldn’t have used Gtalk in the first place.

So far I’ve been using this setup for a week, and it works well. I’ll post an update after a month or so.

Visual Studio Express’ limitations lead to bad practices

Visual Studio Express only has a subset of what you can find in Visual Studio Professional, which makes complete sense, but two missing features actually prevent users from following best programming practices in my opinion. The whole point of Express is to let enthusiasts and students discover the joys of programming – so we might as well let them do it the proper way.

The first limitation is not being able to set an external executable as the “start action” when you debug. In Express, you can only run the debugger on an application project (console, UI, or web application). You can’t set a library project to be the startup project, nor can you attach the debugger to another process. This means that if you want to debug your unit tests, the unit test project must be an executable.

Thankfully, most unit testing frameworks have a console bootstrapper that you can use as your console application’s main loop, but it’s not ideal, and it probably doesn’t entice many people into writing unit tests since they have to figure all this out first. More importantly, it breaks down when you can’t create console applications at all, like when you develop with Silverlight (although .NET 4’s assembly compatibility with Silverlight 4 may make things smoother here).

A way to get around that would be to use add-ins like TestDriven.NET, but Express also has a limitation that it doesn’t support add-ins (this actually got TestDriven.NET into some trouble at some point). Other ways to get around it would be to access Visual Studio’s command window, or drive it with VB macros, but Microsoft closed those “exploits” with Express 2008.

The only way to get around those limitations is to use the .NET Framework’s CLR debugger, which ships with the SDK and is a stripped down version of the Visual Studio shell featuring only the debugging features. The problem is obviously that it’s a bare bones debugger that’s completely disconnected from the IDE experience.

The CLR debugger is also the only way to get around the second of Express’ limitations… You’ve probably already encountered a program that does heavy processing on the UI thread instead of in the background, resulting in sluggish user interaction, no progress reporting, and no button for cancelling the operation. Well, it’s pretty difficult to implement things otherwise with Visual Studio Express, because it doesn’t support multi-threaded debugging.

Sure, you can put breakpoints in code that will run on different threads, and the debugger will break correctly, but you won’t be able to open the “Threads” debug window because there isn’t one. This means you can’t see what’s running at the same time, which makes it pretty useless… and it’s a shame, because the framework makes it so easy to use thread pools, background workers, and such.

It seems critical to me that Microsoft add those missing features to Visual Studio Express as they expand the open-source .NET community with their own projects. It should be as easy as possible for anybody to participate – not just people with a $1200 MSDN subscription. But the reality is that most of those open-source projects aren’t usable with anything other than the Professional versions (which opens up another debate regarding the culture of the .NET community, but that’s for another time). Of course, people could still use other IDEs like SharpDevelop, but then what’s the point of having those Express editions? I’m sure Microsoft would be happy to keep all those students leaning on Windows and Visual Studio – as they should. So if Microsoft is committed to a free, functional, and educational version of Visual Studio, I think they’ll have to fix the feature set appropriately.

My home media & entertainment setup

I was working on this article when I spotted that my friend Bertrand Le Roy had posted on that very same subject, so I’ll turn this into a reply to his. The new year seems like a good time for bragging about one’s home video setup.

First, you may notice that my setup is quite simple because I don’t have any audio gear. Yet. That’s because until recently, my apartments were too small for me to have any decent speakers.

Home Theatre PC (XBMC Dashboard)


I’m still using an HD PVR (bottom left in the photo above) provided by my cable & internet provider, unlike some people who built their own PVR or cancelled their cable subscription altogether. The reason is that up here in Canada, we don’t have a lot of options for streaming or downloading legal content yet. It’s particularly frustrating because we get channels like NBC that pride themselves in telling you, at the end of the show, how you can go to their website to watch episodes you missed… only to get a cold “sorry, this content is not available in your country”. Anyway…

So I have an HD PVR for recording shows. I never, ever, watch live TV. Having to wait through commercials is too painful. I’m looking forward to the time when there will be no TV channels anymore – just direct subscriptions and downloads from content producers, similar to how we don’t listen to music through the radio anymore. Well, some people still do, but I don’t, and that’s made possible by the new platforms. Now if only a Zune Marketplace could open in Canada… but I digress again.

Media Center PC

For the digital content we do have access to in Canada, and for my growing collection of converted DVDs and CDs, I have a Media Center PC.

Similarly to how I don’t watch live TV anymore, I’m in the process of not watching physical DVDs anymore (all my CDs were converted a long time ago). Because yes, it takes too much time to locate the correct case, open it, remove whatever disc was previously in the tray, locate that other case, put the disc back, put the case back, put the new disc in, load it up, wait through the incredibly enraging and often non-skippable segments (very long logo intros, anti-piracy shorts, and sometimes even commercials), and then wait for the stupid main menu animation to finish so you can click on “Play Movie”. I prefer to just locate what I want to watch in a list and press “OK”, thank you very much.

Wow, I keep digressing. Sorry.

Home Theatre PC (Close-Up)


So this, above, is my Media Center PC (you can spot it at the bottom right in the first picture). I built it last spring, and if you want all the facts, here’s what’s inside the Antec Fusion V2 case:

  • Zotac motherboard with a GeForce 9300 chipset and (among other connectors) HDMI output.
  • Intel Core 2 Duo CPU at 3.06GHz.
  • 4GB of RAM.
  • Seagate Momentus 5400.6 160GB (this is a 2.5” laptop hard drive, for minimum noise).
  • Lite-On iHES106-29 DVD/Blu-ray drive.
  • Logitech Cordless Desktop S520 Wireless Keyboard and Mouse.

Home Theatre PC (Hardware)

For roughly the same price as a Mac Mini, I get a bit more power, more RAM, a Blu-ray drive, 1080p HDMI output, a wireless keyboard and mouse, an IR receiver that works with any remote, and the freedom to upgrade it over the coming years (what I don’t get is WiFi or customer support, though). Sure, the Antec case is huge and not so sexy, but it’s quiet as hell. The fridge in the kitchen is noisier than the PC, actually.


The center piece is XBMC.

Home Theatre PC (XBMC)

Oh, God, this is one beautiful piece of software. And it becomes truly awesome when you slap a skin like Aeon on it (although I need to upgrade and check out the new 9.11 default skin).

I’ve been using XBMC for several years, all the way back to my original modded Xbox, and although I sometimes look around, I never find any similar program that’s even half as good. XBMC strikes just the right balance between user-friendly “it just works” and open-source “tweak the hell out of it”.

Home Theatre PC (Zune)

For my music, I prefer to use the Zune Software as a player. This is another beautifully designed program – at least in the graphic department. It just makes any other player look like an ugly Excel spreadsheet.

To manage my music library, however, the best thing I’ve found so far is MediaMonkey. This program alone, which I bought in the “gold” version, is the reason I don’t have Linux installed on my Media Center PC. It makes it easy to manage and maintain a huge music collection when you’re obsessive about tags, album covers and file names.

Other programs include Boxee, which is useful for some things but not quite as appealing outside of the US.

File Server

The Media Center PC itself doesn’t store any data besides its OS and programs. All the data (music, pictures, videos) is stored on a Netgear ReadyNAS NV+ with four 1TB hard drives in RAID (which gives me around 2.7TB of usable space).

File Server

Note the USB hard drive next to it. It receives daily backups of the important data… but I’ll probably write another post sometime about my data storage and backup strategy (it’s an even geekier bragging subject!).

The ReadyNAS has a decent community going on, and since it’s running some kind of Linux, it’s easy to mess around with if you’re not afraid to void your warranty.


For gaming, I have the obligatory Xbox 360 (on the left in the first picture), and a Wii I won at a raffle (about the only time I ever won anything), on the right in the same picture.

I don’t use my Xbox 360 as a media extender or anything because it’s noisy and, unlike my Media Center PC, needs to be turned on and off (the failure rate on this console is bad enough that you probably don’t want to keep it running all the time!). However, I recently bought a couple of videos off the marketplace, so it may take a more prominent role in the future.

Bringing it all together

Because I obviously want to control all this from my couch without thinking about it too much, I have a Logitech Harmony 510 universal remote.

Logitech Harmony 510

This is not the kind of universal remote where you need to press some switch button every time you want to control a different device. It handles things per “activity”, which means all the buttons can be mapped to various devices so that, well, you don’t need to think about it – “Menu” displays the menu, “Volume Up” increases the volume, etc., whichever device that means controlling.


And that’s it! The next steps are obviously to add some nice sound system, and finish ripping all those DVDs (which includes fighting the region lock crap because I bought some of my DVDs back when I lived in Europe).

Exposing global variables in IronPython

Lately I’ve been playing around a bit with IronPython and how to embed it inside an application to add scripting features. I’m still figuring things out, but I had a hard time exposing global variables to the Python environment.

The idea was to expose a couple of .NET objects (mainly a few important managers/singletons from the app’s API) as global variables so that scripts could access and act on the important parts of the app (query the database, batch run actions, etc.).

At first, I exposed some objects as variables of my ScriptScope:

public void SetupScope(ScriptScope scope)
{
    scope.SetVariable("test_string", "This is a test string");
    scope.SetVariable("test_callback_method", new Action(TestCallbackMethod));
    scope.SetVariable("test_callback_function", new Func<string, string>(TestCallbackFunction));
}

The problem was that only interactive scripting would get access to those variables (I had a simple UI for typing commands to be executed on that ScriptScope). Using “test_string” in a function loaded from a module would result in a “name ‘test_string’ is not defined” error. Using either “import” or “global” would not fix it.
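For what it’s worth, regular Python behaves the same way: a name injected into one globals dictionary isn’t visible to code running against another module’s globals, which is roughly the relationship between the interactive ScriptScope and a loaded module. A plain CPython sketch (the names are illustrative):

```python
import types

# A scope is essentially a globals dict; this mimics SetVariable on the
# interactive scope.
main_scope = {"test_string": "This is a test string"}
exec("result = test_string.upper()", main_scope)
print(main_scope["result"])  # THIS IS A TEST STRING

# A loaded module has its *own* globals dict, so the injected name is
# missing there and the same code raises NameError.
module = types.ModuleType("some_module")
try:
    exec("result = test_string.upper()", vars(module))
except NameError as e:
    print(e)  # name 'test_string' is not defined
```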

I then discovered the ScriptRuntime.Globals property, and tried to add something there instead.

public void SetupGlobals(ScriptEngine engine)
{
    engine.Runtime.Globals.SetVariable("test_global", "This is a test global variable.");
}

This didn’t quite work either, and was actually a step backwards: now I couldn’t even access this “test_global” variable from the interactive command line!

That’s until I tried the following:

import test_global

Now I could access my global variable! And using that import statement from my modules also successfully imported it into their scope. Yay!

I’m not quite sure why the import statement works and the other approaches didn’t, but I’m a Python newbie so it’s not surprising. Reading the Python documentation, though, tells me “import” is used to import modules, not variables (my understanding is that the DLR exposes names set on ScriptRuntime.Globals as importable names, which would explain it)… but at least, it works on my machine for now!

Some more contacts love

There’s been a lot of improvement in communications in the past few years, from better services to brand new ones, but I still feel like contact management is lagging behind. I mean, isn’t it important to be able to find how to contact somebody in the first place?

Here are a few things I think could be better.

People have a lot of email addresses

With all the storage space provided by modern email services, and philosophies like GMail’s that advocate not deleting your email, you may have pretty old messages in your account. For instance, mine goes back to 1999 (and sometimes I wish I had thought about keeping my email before then because it’s fun to read those old conversations). This means some of your friends and coworkers may have changed jobs and personal email providers a few times. Some of my contacts have up to a dozen email addresses, actually.

Yahoo! Mail and GMail offer a dynamic list of email addresses where you can add as many as you want for a person, which is good, but Live Mail only gives you a static list of “home”, “work”, “IM” and “other”, which sucks. GMail lets you describe each address as “home”, “work” or “other”, but surprisingly doesn’t let you define custom descriptions. Yahoo! Mail doesn’t have descriptions at all, so you have to guess which goes where.

One thing missing from all 3 services, and that I strongly wish existed, is the ability to tag specific addresses as “deprecated”: if one of your friends isn’t using that old AOL address anymore, you don’t ever want it suggested when you compose a new message. However, you still want that address in the system for when you search old conversations with that contact.
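As a sketch of what I mean, with a purely hypothetical data model: composing would filter deprecated addresses out, while search would still match them.

```python
# Hypothetical contact model with a "deprecated" flag on each address.
contacts = [
    {"name": "Alice",
     "emails": [
         {"address": "alice@aol.com", "deprecated": True},     # old address
         {"address": "alice@gmail.com", "deprecated": False},  # current address
     ]},
]

def compose_suggestions(contacts):
    # Auto-complete when composing: never offer deprecated addresses.
    return [e["address"] for c in contacts for e in c["emails"]
            if not e["deprecated"]]

def search_addresses(contacts, term):
    # Searching old conversations: deprecated addresses still match.
    return [e["address"] for c in contacts for e in c["emails"]
            if term in e["address"]]

print(compose_suggestions(contacts))       # ['alice@gmail.com']
print(search_addresses(contacts, "aol"))   # ['alice@aol.com']
```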

Identity Profiles

Going beyond all this, I sometimes wish email providers would evolve from the old contact model of “name/email/address/notes” probably defined in the early days of Lotus Notes or something. The same way OpenID or InfoCard have “identity profiles”, each with its own set of contact information (name, email, address, website, etc.), contact management could also feature such profiles.

“Work” and “Home” profiles would be the most obvious ones, and would let the user tie together a set of previously unrelated information: right now, contacts may have email addresses for work and home, and IM nicknames for work and home, but they’re in separate lists, with no way to tie them together. Besides, as far as I can tell, no email provider even offers the ability to tag an IM nickname as being for “work” or “home” anyway.

The identity profile paradigm could then be used in powerful ways by client applications. For instance, the “work” profile would be the first suggestion on weekdays between 9am and 5pm, but the “home” profile would take over on week-ends and week-day evenings.
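A client could implement that suggestion rule with a trivial heuristic. This is just a sketch of the idea, with made-up names:

```python
from datetime import datetime

def suggest_profile(profiles, now):
    # Suggest the "work" profile during weekday office hours, "home" otherwise.
    is_weekday = now.weekday() < 5      # Monday..Friday
    office_hours = 9 <= now.hour < 17
    if is_weekday and office_hours and "work" in profiles:
        return "work"
    if "home" in profiles:
        return "home"
    return next(iter(profiles))         # fall back to any available profile

print(suggest_profile({"work", "home"}, datetime(2010, 1, 4, 10)))  # Monday 10am -> work
print(suggest_profile({"work", "home"}, datetime(2010, 1, 2, 10)))  # Saturday -> home
```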

Don’t notify, just change

You probably know how much of a pain it is to notify everyone you know that you have a new email address, home address, and/or phone number. In this age of feeds and automatic updates, it’s weird that there’s no fancy technology with a hard-to-pronounce name that does just that. Still, several people, like Douglas Purdy or Tim Bray, have been thinking about something like the following for a year or two now (that’s a lot in internet age!), and I’ll expand on it a bit.

The idea would be to use something like the hCard standard and combine it with some RSS or PubSubHubbub magic: people who want to contact you would pull (or get pushed) your information instead of storing a completely different (and possibly out-of-date) version in their address book. Since the information is defined by you (via your email provider, your own server, or some 3rd party, as with OpenID), it’s always up to date. You only have to change it once, in one place, and everybody else uses the updated version from then on.
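The pull model boils down to storing a reference instead of a copy. A toy sketch of that (the profile URL and in-memory store are made up; a real system would fetch the published hCard over HTTP or receive pushed updates):

```python
# Toy "don't notify, just change" model: the address book keeps a reference
# to the contact's self-published profile instead of its own stale copy.
published_profiles = {
    "https://example.com/joe/profile": {"email": "joe@work.example.com"},
}

class AddressBookEntry:
    def __init__(self, profile_url):
        self.profile_url = profile_url  # a reference, not a copy

    @property
    def email(self):
        # In a real system this would be an HTTP fetch or a pushed update.
        return published_profiles[self.profile_url]["email"]

entry = AddressBookEntry("https://example.com/joe/profile")
print(entry.email)  # joe@work.example.com

# Joe changes his address once, in one place...
published_profiles["https://example.com/joe/profile"]["email"] = "joe@home.example.com"
# ...and every address book referencing the profile sees the new value.
print(entry.email)  # joe@home.example.com
```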

It would especially be awesome if that kind of technology was used by governments, banks, ISPs, phone and cable operators, and all those guys that you always have to write to whenever you move.

Also, if coupled with identity profiles, you could have scenarios where somebody going away on holiday would update his information by making his “on the move” profile the main one and temporarily disabling all the other ones (especially the “work” one). If the whole system were correctly designed, this would let us avoid those awful “Sorry, I’m in Hawaii until next month” emails, because the client application would already know all this before you send anything. It would also ideally work not only with email but with other protocols as well (IM, etc.).

Clever guy needed

That’s pretty much it. Now I just need a clever guy to make this happen because, well, like most programmers, I’m lazy. Besides, somebody probably had all those ideas and more, already. Still, I’d like my GMail inbox to get better.

Fun with jQuery: the vertical “Coda” slider

Update: my personal website has, since this article, been redesigned and does not feature this technique anymore. The demo page is still available however. It was broken for a while, but should be working again now.

I recently published the new version of my personal website and you’ll have no problem figuring out that I had some fun with jQuery. Probably a bit too much fun, actually, but hey, a personal website is supposed to be just usable enough that you can contact the owner without hassle.

My first approach to the website was a mix of good practices and totally blasphemous process:

  • I wanted a simple website that only lists ways to get in touch with me (Facebook, LinkedIn, and all that social web stuff), and ways to follow what I’m doing online (my blogs, twitter updates, and all that other social web stuff). It would additionally have a contact form so that people could send me a quick message without any extra steps.
  • I also wanted a website with some funky jQuery shit going on. I wanted to learn the API along with vanilla Javascript.

After some research about what jQuery can and cannot do, and a few sketches on my trusted notebook, I came up with a totally revolutionary (in the post-Web 2.0/post-iPhone 21st century definition of the term1) idea: the vertical “Coda” slider!

This UI pattern has been popularized by Panic’s website for their Coda software. It features a panel in which pages slide in and out when you click on their title, displayed in some kind of list or menu bar. There’s a nice tutorial on jQuery for Designers that explains how to reimplement it. The only difference is that I wanted the pages to slide in and out vertically instead of horizontally. You may point out that the aforementioned tutorial features an option to do just that, but it’s not quite the way I wanted. I want it to look like a standard page until you click one of the navigation links, which is when it goes “shwoop” and you go “wow I did not expect that!”.

No need to go through a lengthy tutorial, as you can probably figure out how it works just by looking at the code, but still, to make it easier on you I created a “demo” version of the page with placeholder content and none of the other bouncy crap going on. Go check it out if you want to steal some of it, although I still have a couple of little quirks to fix, especially with window resizing.

  1. Which means “not very revolutionary”. ↩︎

Some similarities between Apple and Steve Jackson Games

Apple is a company whose boss is a guy named Steve who is, by reputation, quite charismatic but also a real asshole when it comes to working with him and using his intellectual property. Their main product gives them only a small fraction of the market, and its core of devoted fans can be loyal up to a rather fanatical point. This product is always set against the more popular product, which is seen as outdated, inferior, over-marketed, and riddled with product updates that break compatibility with silly new features. Flamewars about which product is better are frequent. Apple’s product supposedly covers everything you may need, although fans still usually spend large amounts of money to get add-ons and accessories. However, the other product is still the dominant one by far, and most beginners start with it. Ironically, Apple’s most successful product is a small and fun “side” product. It has seen several iterations and lots of additional products are available.

Steve Jackson Games is a company whose boss is a guy named Steve who is, by reputation, quite charismatic but also a real asshole when it comes to working with him and using his intellectual property. Their main product gives them only a small fraction of the market, and its core of devoted fans can be loyal up to a rather fanatical point. This product is always set against the more popular product, which is seen as outdated, inferior, over-marketed, and riddled with product updates that break compatibility with silly new features. Flamewars about which product is better are frequent. Steve Jackson Games’ product supposedly covers everything you may need, although fans still usually spend large amounts of money to get add-ons and accessories. However, the other product is still the dominant one by far, and most beginners start with it. Ironically, Steve Jackson Games’ most successful product is a small and fun “side” product. It has seen several iterations and lots of additional products are available.

Experimental IronCow branches

I created 2 experimental branches for future versions of IronCow.

  • “IronCow Mobile” is a branch that adds support for the .NET Compact Framework. Thanks to jwboer for the initial patch.
  • “IronCow Local Search” is a branch that adds local search for tasks. We basically cache all the tasks in memory, and handle search queries locally, instead of sending a request to the RTM server and parsing the response markup. The lexical analysis and AST building of the search query is a bit dodgy, as I can’t get a proper tool like ANTLR to work with RTM’s search grammar (probably me doing something wrong), but it’s not too much of a problem right now since search queries tend to be quite short, and we are already significantly faster than a web request.
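
The caching idea itself is simple. Here’s a minimal sketch of the principle (not IronCow’s actual code — the task shape is invented, and a real version would tokenize RTM’s search grammar into an AST rather than handle a single “name contains” case):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified task shape -- not IronCow's real model.
class RtmTask
{
    public string Name;
    public string List;
    public DateTime? Due;
}

class LocalSearch
{
    // The in-memory cache that replaces a round-trip to the RTM server.
    private readonly List<RtmTask> cache;

    public LocalSearch(IEnumerable<RtmTask> tasks)
    {
        cache = new List<RtmTask>(tasks);
    }

    // Handles only a trivial "name contains" query; the real branch parses
    // RTM operators (name:, list:, due:, and/or/not...) into an AST first.
    public IEnumerable<RtmTask> Search(string nameContains)
    {
        return cache.Where(t =>
            t.Name.IndexOf(nameContains, StringComparison.OrdinalIgnoreCase) >= 0);
    }
}
```

Once the tasks are local, even a naive linear scan like this beats a web request for the short queries people actually type.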

Check them out!

Target the .NET Compact Framework using Visual Studio Express

Microsoft only supports Visual Studio Professional for developing Windows Mobile applications and, more generally, code based on the .NET Compact Framework. You get nice things like application deployment and emulators and remote debugging and all. But if you just want to compile something against the .NET Compact Framework, for example to check that you’re using supported methods and classes, you can do that with Visual Studio Express.

Create a project in Visual Studio Express and open it in a text editor. In the first “<PropertyGroup>” node, add the following at the end:

    <NoStdLib>true</NoStdLib>

This will tell MSBuild to not include mscorlib.dll automatically, so we can make it use the Compact Framework’s version.

Next, re-open the project in Visual Studio Express, delete all the system references (System.dll, System.Xml.dll, System.Data.dll, etc.), and re-add them, only this time use the Compact Framework’s assemblies. You’ll have to directly browse to those DLLs, as they probably won’t show up in the default dialog.
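
In the project file, the re-added references end up looking something like this (the HintPath below assumes the default .NET Compact Framework 2.0 SDK install location — adjust it to wherever your SDK lives):

```xml
<ItemGroup>
  <!-- Point each system reference at the Compact Framework copy of the
       assembly instead of the full framework's version. -->
  <Reference Include="mscorlib">
    <HintPath>C:\Program Files\Microsoft.NET\SDK\CompactFramework\v2.0\WindowsCE\mscorlib.dll</HintPath>
  </Reference>
  <Reference Include="System">
    <HintPath>C:\Program Files\Microsoft.NET\SDK\CompactFramework\v2.0\WindowsCE\System.dll</HintPath>
  </Reference>
</ItemGroup>
```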

Now rebuild your application. It should build against the Compact Framework. You can test that by adding an instruction that’s unsupported there, such as “Thread.Sleep(TimeSpan)”.
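
For example, this compiles fine against the full framework, but the Compact Framework only ships the Thread.Sleep(int) overload, so the build should fail if the references were swapped correctly:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // The full framework has Thread.Sleep(TimeSpan); the Compact
        // Framework does not, so this line failing to compile is exactly
        // the signal that we're really building against the CF assemblies.
        Thread.Sleep(TimeSpan.FromMilliseconds(10));
        Console.WriteLine("built against the full framework");
    }
}
```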