The Stochastic Game

Ramblings of General Geekery

Themes in PieCrust

I just pushed a lot of changes to the dev branch of PieCrust, including the new support for themes. The point of themes is to make it easy to change your website’s appearance by further separating content and look.

Here’s an early look at how themes work, so that anybody can play with it and provide feedback. Not everything is in place yet, so now’s the best time to affect the design.

The theme folder

When a website is using a theme, that theme will be placed in the _content/theme folder.

The theme itself is really just another PieCrust website: it has its own _content folder with pages and templates and everything you expect. The only differences are:

  • The configuration file is named _content/theme_config.yml instead of _content/config.yml. This is so that chef doesn’t get confused about the root of the current website when you run it from inside the theme folder.
  • A theme should have a theme_info.yml file at the root of the theme. It’s a YAML file with, at minimum, a name and a description (see the example below).
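For illustration, a minimal theme_info.yml could be as short as this (the values are placeholders – name and description are the only fields mentioned so far):

name: My Theme
description: A clean, simple look for any PieCrust website.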

The theme behaves as follows:

  • Pages defined in the theme (in _content/pages, like any other PieCrust website) are added to the base website, unless a page with the same URL has been defined there. It makes it possible for themes to define pages like an “About” page or some blog archives. They effectively complement the website on which the theme is applied.
  • The theme’s templates directories (site/templates_dirs setting) are added before the default _content/templates directory, but after any other custom directory defined by the website. It makes it possible for themes to override the default templates, and for users to override a template defined in a theme.

The themes command

A new themes command is available in chef. It looks a lot like the plugins command, in the sense that it offers the same 3 sub-commands: info, find and install. Right now, however, there are no themes in the default repository, so chef themes find won’t return anything, which means there’s no theme to install.

You can set up a local repository by setting site/themes_sources to /path/to/my/themes, where that path contains one or more themes in sub-directories. You can then install one of your local themes by running chef themes install <name>, which basically just copies the theme into _content/theme.
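As a sketch – assuming the setting lives in _content/config.yml like other site settings, and that /path/to/my/themes contains a sub-directory named my-theme:

site:
    themes_sources: /path/to/my/themes

And then:

chef themes install my-theme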

For faster development, however, I would recommend just sym-linking your theme’s root directory to _content/theme.

Quickstart

To summarize, here’s how you can write a theme for PieCrust at the moment (a shell sketch follows the list):

  • Create a PieCrust site as usual.
  • Rename _content/config.yml to _content/theme_config.yml.
  • Add a theme_info.yml file at the root. This is a YAML file, just like
    config.yml.
    • Give it a name.
    • Give it a description.
  • Write pages, templates, CSS stylesheets and so on.
    • It’s a good idea to implement standard pages like _index.html,
      _tag.html and _category.html.
    • It’s also good to implement standard templates like default.html and
      post.html.
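A rough shell sketch of those steps (my-theme is a placeholder name):

# starting from a freshly created PieCrust site, at its root:
mv _content/config.yml _content/theme_config.yml
printf 'name: my-theme\ndescription: A sample theme.\n' > theme_info.yml
# ...then write pages, templates and stylesheets as usual.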

When you’re happy, symlink – or copy – the theme’s directory to a website’s _content/theme directory. Play around by adding new pages and posts to that site.

As always, report issues on the BitBucket or Github issue trackers.


Todon’t

Jeff Atwood posted another one of his controversial, opinionated articles on his blog, this time about To-Do lists – a long rant about how they fail us.

As a former Remember The Milk user and fan, I can totally relate. I stopped using To-Do lists altogether a couple of years ago – I simply didn’t need them anymore, as I knew most of the time what I had to do:

If you can’t wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day – then you should seriously work on fixing that.

But the truth is: I’m still using some form of list, especially since Trello came around. It took me a while to realize the difference between the Getting-Things-Done-ish productivity I tried to achieve in the past, and the more zen-like – and effective – process I have now.

Patrick Rhone eventually wrote it for me:

[…] increasingly, my to do list is full of the things I park there that otherwise get in the way of what I’m actively focused on.

My To-Do lists (hosted on Trello) are filled with stuff I don’t want to do right now but need to remember for later. I know what I need to be working on right now; what I may forget is the stuff I want to do later. It’s more like a notebook than a To-Do list, really, but the To-Do list format makes it easier to cross things out when they become invalid or when I’ve already done them.


The post-PC era

You can see this kind of headline all over the web these days, especially from Apple-fanboy tech bloggers: the PC is dead, all hail tablets and smartphones. The same argument is made for video game consoles, which are supposedly on their way out, to be replaced by, guess what, tablets and smartphones. Even Jeff Atwood is getting on the bandwagon.

I don’t disagree with the facts here: most indicators we have on the market right now show that, indeed, desktop and laptop computers have declining sales while mobile products have an ever-accelerating growth.

Some tech bloggers, however, are a bit too quick to equate opposing trends with replacement – in reality, people still own PCs and Macs, but complement them with mobile devices. As far as I know, there’s no evidence that anybody is actually getting rid of their laptops and desktop computers after buying an iPad… yet.

Main device of choice

Interestingly enough, Jeff Atwood recently posted an article about the ASUS Zenbook Prime (which, incidentally, is the laptop I bought my wife) as being “the last laptop he may ever buy”. He argued that, more often than not, he would choose a tablet or smartphone when deciding what device to use or take with him, leaving the laptop behind (which sounds dubious from a guy who must spend most of his time in Visual Studio, but whatever).

This, indeed, points at actual replacement. But look at the way he supposedly makes his decision:

  • Want 10 hours of real world battery life?
  • Want to start doing stuff immediately?
  • Want the smallest most portable device you can get away with?
  • Want to be always connected to the Internet?
  • Want easy access?

Those are never the questions I have in mind when deciding what to put in my backpack, or what to take with me around the house. Those are just specs. Some are laughable (seriously, you can’t wait 2 seconds for the laptop to resume from sleep?), and some are kind of obvious (well yes, I won’t take anything too bulky or impractical on the bus, but that’s as applicable to a laptop as it is to a large book, my drawing equipment, or my bass guitar, which I would otherwise very much like to take with me everywhere), but they’re all beside the point.

Instead, there’s only one question I ask myself: what do I want to do? And then just take what lets me do exactly that, given the environment in which I intend to do it.

For me, this is almost always the laptop – and I’m not the only one. Some of it is my own personal tastes and workflows (the need for a real keyboard, a big enough screen, a connection to my file server), some of it is the software I need to run (Lightroom, Visual Studio or any other development environment, Mercurial or Git, Pro Tools), and some of it is the necessities of the task at hand (using a Wacom tablet or Cintiq, or a high-speed audio interface with several music instruments and microphones plugged in). All of this basically prevents me from using a tablet even if I wanted to (which I don’t, really).

What goes in the bag?

The only task for which I consistently pick the tablet is consuming content, like catching up with my “to read/to watch” list, going through social networks and feeds, and enjoying comic books.

It’s also the only device I take with me to bed or when leaving the house. Is that a shortcoming of laptops, then? Absolutely not. I never planned to write code or tag photos or record music in a crowded subway before, and that hasn’t changed. What I would plan to do is read something, and that’s why I always carried a couple of books with me.

Now I just carry one small tablet – not because it’s small, or because it has better battery life, or because it turns on right away, but because it’s the best digital replacement for books (I don’t carry a Kindle or other e-book reader because I tend to read a lot of comics and other illustrated books, like RPG books).

For me, the tablet is a definite improvement (fewer physical books cluttering the house, a lighter and smaller backpack, a more pleasant reading experience), but it has barely replaced anything I do on the laptop. The only thing it did replace is Google Reader/Facebook/Twitter in a browser.

I also hear a lot of people saying a phone or tablet is better for quickly checking emails, or looking up something on the web, or whatever. Although I did use my mobile devices for that at the beginning, I found that it was not that much of an improvement. This is because:

  • I stopped compulsively checking my emails and notifications throughout the day. Now I only do it a few times a day, which means that when I do, I probably have several things to delete/archive/reply to, and that’s just quicker on the laptop (you do use keyboard shortcuts, don’t you?).
  • I have a lot of productivity shortcuts set up on my system. I can look something up on a map, or on Wikipedia or IMDB or whatever, in less time on the laptop than on my iPad. I know, because I actually tested it.

One of the exceptions is looking up recipes. My wife got me a fridge stand for the iPad for when I’m cooking – it’s also easier to clean grease marks off a screen than off a keyboard. But, again, that didn’t replace the laptop: before, I would have had a recipe book open on the counter.

The post-PC era

Now, we can try to imagine how this is going to change in the next few years. I wouldn’t be surprised if, at some point, your phone could be paired with an external display, on top of a mouse and keyboard, and get into a mode where it’s running a full-fledged (or legacy) OS that can run more complex applications. In the future, your phone could very well be your wallet, your identity, and your one and only computer, with some storage and possibly computing power extending to the cloud.

Is that when the PC will effectively be dead? No. That’s just when your phone will have evolved into your PC. That’s what Jeff Atwood really hints at:

Our phones are now so damn fast and capable as personal computers that I’m starting to wonder why I don’t just use the thing I always have in my pocket as my “laptop”, plugging it into a keyboard and display as necessary.

The “post-PC” era is really about moving past some existing form factors and into new ones. But if you could plug your phone into a keyboard and display to run Visual Studio or Lightroom or whatever, what do you really have? Well: a very small desktop computer.

We went from cabinet-sized PCs to desktop PCs to smaller PCs to, well, phone-sized PCs (yes, they already exist), but only with smartphones and tablets did the industry embrace new UI designs and input methods. And the danger here (apart from the attached problems of vertically integrated services and other monopolies) is to think they’re a panacea.

I’m looking forward to seeing how it will keep changing the market, but meanwhile I’ll keep using my laptop. You know, the one that keeps getting lighter and thinner while gaining multi-touch trackpad capabilities. Thanks a lot for that, by the way.


This is why people buy Macs

A few months ago I set out to get a new laptop for my wife. She only had one requirement, after having shared a Macbook Pro with me for the past couple of years: that it run Windows (cue OS flamewar).

I quickly decided I wanted to give her something slick and light, and looked at the new line of ultrabooks. I then narrowed the choices down to the Samsung Series 9 and the ASUS Zenbook by reading reviews online… but that was just the easy part.

Much has been said already about the shopping and out-of-the-box experience of PCs, compared to that of Macs, but I think we should keep beating that dead horse until it’s underground. So keep reading for much deceased equidae action.

Getting your hands on the stuff

The first step is to actually get your hands on the models you’re looking for. And while that’s easy with a Mac – just walk into any Apple store or Best Buy or whatever – it’s not that easy with a PC.

Different stores carry different models, have different deals with manufacturers, and have different supply chains. Even within one store franchise, the availability of models varies from store to store, and that’s not counting which ones are actually on display: for some reason, a lot of stores won’t let you touch expensive laptops like the two I wanted to try. Paradoxically, the Apple mini-store in the next aisle would always happily provide multiple copies of each of its devices for people to play with.

In my case, London Drugs (here in Vancouver, Canada) was one of the only stores to carry the 2 models I was looking for – the whole point being to compare them side by side. But it still took me several visits to stores all around town to find one with both units actually on display.

During my search, I got some interesting tidbits of information from sales people:

  • High-end laptops are already well taken care of by Apple, so shelf space is reserved for feature-, mainstream-, or budget-oriented models from PC brands. Windows power-users get forgotten in the process.
  • Some stores have trouble securing those new ultrabooks, so they end up not having them on display at all. Unlike Apple, which provides appropriate locks for the Macbook Air, other manufacturers leave it up to the stores, which often use generic, bulky locks. But those locks often don’t work with ultra-thin cases… and when they do, they’re almost as heavy as the laptop itself, which completely ruins the experience of trying it out.

I eventually managed to test the models I was considering, and quickly decided the ASUS Zenbook would work better. Good news: the Zenbook Prime was just about to be released, with a much nicer body design and a supposedly better trackpad… so after a couple weeks waiting for that one to show up in store, I came back home one day with my precious gift.

Stickers

The Zenbook is a beautiful machine, but like many PCs it comes with a bunch of ugly stickers. This is always totally baffling to me: what’s the point? Are people going to look at your laptop at Starbucks and go “I see your laptop has an Intel CPU inside! Mine too, this is so awesome! Much better than AMD, don’t you think?”.

No, all they do is waste 5 minutes of your time as you remove them and then clean up the sticky bits that they leave behind.

The silver lining in this case was that all 3 stickers that come on the Zenbook are mostly in grey-ish tones similar to the laptop’s body – no flashy ugly colors here. So it’s not completely horrible. Just annoying.

Bloatware

Another expected downside of buying a PC: all the bloatware that comes pre-loaded with it. Everything, from the moderately useful to the totally useless, can be found on the Zenbook once you boot it up. I counted around 20 “utilities” installed.

Of course, I opted for the usual practice of wiping everything and installing a fresh Windows. I fought the BIOS for a long time, trying to get the default GUID Partition Table to work with my bootable USB key, but UEFI boot wouldn’t work for some reason, so I eventually gave up and re-allocated the whole drive as a more classic Master Boot Record partition. It’s kinda sad, this being 2012 and all, but as I understand it, I’m not losing much on a single 128GB partition.
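For reference, the MBR conversion can be done from the Windows installer itself (Shift+F10 opens a command prompt) with the built-in diskpart tool – a sketch, assuming the SSD shows up as disk 0, and keeping in mind that clean wipes the whole drive:

diskpart
DISKPART> list disk
DISKPART> select disk 0
DISKPART> clean
DISKPART> convert mbr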

Of course, once Windows is installed, the fun has only begun: you still have to go to ASUS’ website to get the drivers for all kinds of components, along with downloading a whole bunch of updates from Microsoft.

Conclusion

The Zenbook Prime, once you’ve been through all those hoops, is a wonderful little machine. It looks gorgeous, is very portable, and the performance is pretty good. The accessories are also nice, especially the leather-ish case for both the laptop and some of the cables. The power cable is the only poorly designed component in the whole package in my opinion, but overall, I recommend this laptop highly.

But going through all these steps was pretty tedious. In comparison, the shopping and out-of-the-box experience of a Mac goes along these lines:

  • Go into any computer store.
  • Try any Macbook.
  • Give lots of money.
  • Boot it up.

I guess Microsoft Stores and Samsung Stores and the like are trying to address the issue, but it’s a long way away.


The state of Diaspora

In the main article about the road to Diaspora, we looked at setting up our own pod to interact with the Diaspora federated community. Now we’re going to look at how that actually works. Or not. Because since I set up my pod a few weeks ago, I’ve had nothing but problems.

The river of problems

To give you an idea of how bad Diaspora is, even after a couple years of development, look no further than the bug tracker on their Github project page. After a week of using my pod, I had already posted half a dozen issues – and that’s for just one user on a lonely pod. I wasn’t following a lot of people either. Some of those issues have been closed, some are still open. But the point is: this is not good for a project that’s been in beta testing for so long.

Then there are the performance issues. Ruby on Rails has never had a very good reputation in this department, but I thought it was mostly a “haters gonna hate” kind of reputation, or a remnant of the framework’s early days, when it was not particularly optimized. After running Diaspora, though, I’m revisiting my opinion. On an Ubuntu VM with 512MB of RAM, the thin server that runs Diaspora is the only process that ever got killed for running out of memory – neither WordPress nor MediaWiki nor anything else ever ran into that situation. And it happens on a regular basis – a couple of times a day on average. Don’t even think of running a Rails console at the same time: it may work for a while, but you’d better finish quickly, because one of the two processes will die soon enough.
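If you want to check that it really is the kernel’s OOM killer doing the deed on your own VM, the kills show up in the kernel log; on Ubuntu, something along these lines does the trick:

dmesg | grep -i "out of memory"
grep -i "oom" /var/log/kern.log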

And then there’s the whole “federation” aspect, which is the entire point of Diaspora. Spoiler warning: it doesn’t work. I don’t know if it’s related to the performance problems above (the process could be dropping important bits of information when it gets killed), but I always seem to be missing posts and updates from anybody who’s not on my pod. Doing some testing between my pod and a few well-known pods like joindiaspora.com or diasp.org, I get completely random results: sometimes a comment or “like” gets propagated in a few seconds, sometimes it takes hours, and sometimes it just never shows up on one end or the other.

I could go on and on about all kinds of little problems, from the completely stupid “PersonName started sharing with you!” email that doesn’t make any sense (it means they’re following you, which means they won’t share anything with you unless you follow them back), to some weird design decisions around hashtags (try figuring out how to follow hashtags from other pods), to some obvious problems that never seem to get fixed (like useless Diaspora links on your Twitter cross-posts, or the inability to cross-post public posts to Facebook). I guess that’s what the Github bug tracker is for but, again, this really doesn’t look like a project that’s more than two years old.

Diaspora’s future

The original Diaspora founders recently left the project, saying they’re “giving it back to the community” while they “take the back seat” to work on Makr.io, a very stupid website concept. Some people say that’s the death of Diaspora but, judging from the state of it, it may actually be what saves it. The project was obviously badly managed and designed from the start, so maybe, just maybe, having a community of better programmers take over the codebase will make it viable.

There’s also the very, very slim possibility of someone re-implementing Diaspora using a different framework than Ruby on Rails. Diaspora is mostly based on open standards – although the keyword here is “mostly” – so it would be possible to rewrite something compatible from scratch.

There’s also the possibility, just as slim, of someone writing something else that works better. Mike Macgirvin, the creator of Friendica, recently started working on “Project Red”, which is “Friendica taken to the next level”. His announcement didn’t hesitate to make fun of Diaspora:

Friendica WORKS today (unlike similar projects which are still struggling at basic communications after two years, and after squandering huge amounts of money).

Seeing how Friendica works and installs easily – even though it’s ugly and useless to me – Project Red could be a good thing, if it’s based on asymmetric relationships and if Mike can enlist the help of a moderately talented web designer.

In the meantime, I don’t think we’ll get people away from Twitter and Facebook any time soon.


The journey to Diaspora: setting up your own pod

The first step in the journey to Diaspora is to get your own Diaspora server because, well, that’s the whole point of a distributed social network: you get to own your stuff (you could argue that, on the other hand, I’m not running my own email server, but, err, whatever, indulge me).

Unfortunately, setting up a Diaspora pod is insanely convoluted and complex.

After the jump, we’ll get into the meat of things and hopefully it will help you with the process (if you ever want to attempt it).

Follow the guide

Looking at the complexity of setting up a Diaspora pod, there’s something to be said about the Ruby community, which often seems to degenerate into a pile of over-engineered, overly complex, weirdly named libraries and tools, all of which end up very slow and memory-hungry. Hipsters were the worst thing that ever happened to the Ruby language.

Anyway, enough trolling. You probably want to start by following the installation guides over at Github. My server is an Ubuntu virtual machine, so I followed the Ubuntu guide, which will make you apt-get install a whole lot of packages, from system libraries to Ruby and the usual suspects (gem, bundler, rake, etc.). Once that’s done, you move on to the Diaspora installation guide itself, which is where things get fun (given a certain definition of “fun”), and where I’ll try to hold your hand.

Note that several “easy installer” initiatives are being worked on, mostly targeted at popular hosting companies like Heroku. By the time you read this, you may have something available for your situation.

Get an SSL certificate

The first thing you need is an SSL certificate for your domain. Since you’ll be exchanging potentially private communications between your pod and the other pods, it’s a good idea to go through https, which means getting an SSL certificate. Normally a self-signed certificate would do, but the bigger pods (a.k.a. “community pods”) won’t accept one, so you need a proper certificate.

The documentation will point you at StartSSL, which delivers free class 1 certificates for your domain for a year. That’s enough to get you started. Just go to their website and follow the instructions – you’ll end up with a few keys and other serious-looking files. If you plan on hosting your pod on a sub-domain, don’t forget to add your top-level domain to the alternative domains falling under the certificate.
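If you’d rather generate the private key and certificate request locally instead of letting the website do it in the browser, the usual openssl incantation is something along these lines (mydomain is a placeholder):

openssl req -new -newkey rsa:2048 -nodes -keyout mydomain.key -out mydomain.csr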

Getting Diaspora

If you keep following the installation guide, you will at that point have cloned the Diaspora repository on your server, run a few commands like bundle install, and edited a few configuration files. Here are a few tips, though:

  • Before you do all the RVM and bundler stuff, make sure you apt-get install libreadline-dev libncurses-dev. Hopefully this will make the readline gem work right away. We’ll talk about that later.
  • For some reason, the config/application.yml.example file features a pod_url with a port 3000 specified. Make sure you take this out when you replace it with your own domain URL. This has to be the real, public URL (otherwise your Diaspora ID will have :3000 in it!).
  • For me, the ca_file was /etc/ssl/certs/ca-certificates.crt, but it of course depends on your Linux distribution.

Of course, you’ll need to follow the instructions for setting up a “production” server. Don’t set serve_static_assets to true – there’s no need for that, since Apache will serve the static assets already. Set single_process_mode to false, too, because it’s mostly meant for development/debugging purposes.

I’d recommend setting up the mailer right away (it’s the thing that sends notification emails). For some reason, even when mailer_on is false, Diaspora will try to send some emails, resulting in errors and failed jobs. It’s not a huge deal (maybe), but I like to avoid false positives in my error logs. Here it goes (a combined excerpt of config/application.yml follows the list):

  • Install sendmail with some apt-get or something.
  • Set the mailer_method to sendmail.
  • That’s it! (although you can optionally set some nice email address for the sender).
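Pulling those tips together, the relevant lines of config/application.yml would look something like this – pod.example.com is a placeholder, and depending on your Diaspora version these keys may sit under a defaults or production section:

pod_url: 'https://pod.example.com/'
ca_file: '/etc/ssl/certs/ca-certificates.crt'
serve_static_assets: false
single_process_mode: false
mailer_on: true
mailer_method: 'sendmail'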

Setting up the database

For some reason, the config files assume you’re going to run the app with the root MySQL user. I don’t know what they’re smoking over at DiasporaHQ, but there’s no way I’m doing that, so I created a new database myself, called diaspora_production, along with a user that has access to it. You could name the database differently, I guess, but you’d have to change the default name in all the configuration files – and since I’m not sure whether anything hard-coded references that name, I went with the safe route of keeping their naming convention.
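Here’s a sketch of that setup in the MySQL console – diaspora and some_password are placeholders, and whatever you pick also has to go into config/database.yml:

CREATE DATABASE diaspora_production;
CREATE USER 'diaspora'@'localhost' IDENTIFIED BY 'some_password';
GRANT ALL PRIVILEGES ON diaspora_production.* TO 'diaspora'@'localhost';
FLUSH PRIVILEGES;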

Once that’s done, you don’t need to run the db:create Rake task. Just run bundle exec rake db:migrate. You should see your database filling up with tables.

Apache configuration

Now you’ll need a web server to actually get anything working. I’m using Apache for this.

The first thing is to make sure the SSL/https stuff will be working:

  • If you don’t have that already, add NameVirtualHost *:443 and Listen 443 to your conf files to make Apache listen to incoming connections on port 443 (which is used for https). You can put that next to the similar directives listening to port 80 (which is standard http).
  • You’ll need to open port 443, obviously, in case you have a firewall (which you should have). Again, just look for where it deals with port 80, and copy/paste. If you’re using iptables, you should end up with something like -A INPUT -p tcp --dport 443 -j ACCEPT.
  • Make sure you have mod_ssl loaded/enabled.

Then, add an entry for the new website (your pod). You can use the Apache configuration that they point to in the installation guide. The only modification I made was to add a reference to the SSLCACertificateFile I got from StartSSL, so all my SSL stuff ends up looking like this:

SSLEngine On
# You generated those with StartSSL
SSLCertificateFile /path/to/cert.crt
SSLCertificateKeyFile /path/to/private_key.key
# You got those from StartSSL
SSLCertificateChainFile /path/to/sub.class1.server.ca.pem 
SSLCACertificateFile /path/to/ca.pem

For the rest, you don’t need to change much except the path to the Diaspora repository you cloned with Git. Make sure you specify the path to the public folder, not the root folder, otherwise it won’t find all the CSS and images and such.

Also note that you’ll need mod_proxy, mod_proxy_balancer and mod_proxy_http loaded/enabled for this to work.
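On Ubuntu, assuming the stock Apache packaging, enabling those modules (plus mod_ssl from earlier) takes a couple of commands:

sudo a2enmod ssl proxy proxy_balancer proxy_http
sudo service apache2 restart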

Running the pod

Holy shit. See what I told you about it being insanely complicated? Well, it’s not quite over yet.

Now you can at last run ./script/server, wait a bit, and go visit your brand new pod. If everything’s good, you should have a little “lock” icon in your address bar (and if you click on it, it tells you your domain is verified by StartCom Ltd.). You should also see the Diaspora welcome page.

Go and sign up! It’s all yours!

When that’s done, you can go back to config/application.yml and set registrations_closed to true so that nobody else can create an account (if that’s what you want).

Kill the server with CTRL+C and, while it’s down, open the Rails console with bundle exec rails c.

Remember when I talked about the readline stuff? Well, if you didn’t set it up right, this is where it fails. If so, you’ll have to patch the readline gem like I had to.

Go to ~/.rvm/src/ruby-1.9.3-p125/ext/readline and run ruby extconf.rb, make, and make install. Make sure you run extconf.rb with the same version of Ruby as the one you’re patching! (If you only have one Ruby installed, you don’t have to worry about that detail.)
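In shell form, that’s:

cd ~/.rvm/src/ruby-1.9.3-p125/ext/readline
ruby extconf.rb
make
make install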

Running the console is for adding your newly created user to the “administrators” role, so you can access the administration dashboard: go to this FAQ page and look for “Roles”. You should see a bunch of commands to type into the console to make yourself an administrator.
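From memory, the gist of it looks like this in the Rails console (yourname being the account you just created) – check the FAQ for the authoritative version:

u = User.find_by_username("yourname")
Role.add_admin(u.person)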

When you’re done, start the server again with ./script/server.

Administration tips

On your pod’s website, in the top-right menu (where you have “Contacts” and “Log Out”) you should see a new entry called “Admin”. Click on it. The most useful page here is the last one: “Resque Overview”.

Resque is the background job manager, which takes care of a lot of important stuff for your pod. The overview page will show you, among other things, any failed jobs. This is useful to troubleshoot problems and send bug reports.

Another hidden page is at /admin_panel. This is the Rails panel dashboard, where you can basically hack into your database to fix things, compare values, or clean data. It’s a bit dangerous but can be useful at times.

Conclusion

That’s it! If you made it this far, congratulations, you have absolutely no life. You should, at this point, be able to send useless status updates to your so-called friends, with the warm feeling that comes with using open standards.

In the next post in this series, we’ll look at all the problems you will face with your Diaspora pod – and there are many, unfortunately.


The journey to Diaspora

Recently, app.net has gotten a lot of attention, but I just don’t see the appeal. It’s basically a Twitter clone that you have to pay for – and all of this for what? So that the API is nicer to developers and you don’t see a couple of “promoted tweets” once in a while?

It sounds like a very shallow goal for a supposedly “disruptive” communication platform. Sure, it has some kind of grand plan to get us to the next level of connectedness through, err, innovative apps and mashups or something. But it doesn’t make things better on the ownership level. It’s still yet another data silo. And I’m fed up with silos.

Remember when we used to communicate through free, open, distributed and standardized protocols? You know, like emails or phone calls? Or even snail mail? My problem with Facebook or Twitter is not that I’m the product, but that I don’t own my data, and that there’s no competition or choice between service providers. They’re not only data silos, but business model silos.

Other people also got fed up with the slow decline of the open and connected web, and started writing the building blocks for distributed social networks. Several projects emerged from this, with the most famous being Friendica and Diaspora.

  • Friendica: its main advantage is that it’s easy to install – like most PHP projects, you just copy files onto your server, edit a configuration file, and follow the instructions on the web. There are just a couple of additional steps after that to set up your crontab so it can synchronize posts in the background. It also has very good interoperability with pretty much all the other social networks. The big problem, however, is that it’s batshit ugly. Also, as far as I can tell, it only allows bi-directional relationships, like Facebook, which means that if you want to “follow” somebody, you’ll have to send a “connection request”.
  • Diaspora: announced with much fanfare a while ago, it felt like vaporware for the longest time, until it actually looked like they were shipping something. The selling point is a Google+-like “friendship” model, where relationships are asymmetrical and can be classified with “aspects” (Diaspora’s version of Google+’s circles). However, as we’ll see in the next post in this series, it’s insanely complex to install, full of bugs, and performs poorly. It also only allows cross-posting to a few select competitors (Facebook, Twitter and Tumblr).

With that in mind, Diaspora doesn’t seem to me like the best possible alternative to commercial social networks – but it’s the only real contender so far. In this journey to Diaspora, we’ll look at:

  • Setting up your own pod.
  • The state of Diaspora, i.e. all the problems you’ll run into once it’s up.


Microsoft password fail

Almost 4 years ago, I wrote a short article about dumb websites that enforce a maximum password length.

Now, in 2012, there are still websites with such stupid policies. One of the most famous is none other than Microsoft’s Live Account service, which serves as the authentication hub for all things Microsoft. Basically, your Live ID, or whatever it’s called, can’t have a password longer than 16 characters.

Microsoft is, rightly, getting a lot of criticism about this, because the recently released Windows 8 lets you link your Windows user account to a Live ID, for use with the whole Windows App Store thing and more. This means, in most cases, that your Windows password can’t be longer than 16 characters – and, in other cases, that you get all kinds of weird account-related bugs and may need to enter a truncated password.

Oh, Microsoft.


When Windows “just works”: part 2 (the work around)

Back in the first part of this 2-part post, we looked in some detail at how MacOS mounts network shares, and how badly designed that feature is compared to its Windows counterpart.

We’ll now look at the solution I’m using to fix the problem: mounting network shares in a consistent way on a multi-user machine.

The work around

Thanks to the power of UNIX, you can get around the problem by spending hours reading boring documentation, searching useless forums, editing obscure configuration files and generally speaking wasting your time for something Windows gets right in 2 clicks.

If you do the research, you will find lots and lots of solutions – some clever and some completely stupid. Not all of them worked for me, so here’s the solution I came up with. It uses the autofs feature supported by most recent versions of MacOS, and has some pros and cons. “Your mileage may vary”, as they say.

Creating custom mount points

Start by editing /etc/auto_master. It should look like the following, but without that last line. Add it using your favorite text editor (which you may have to run as root):

#
# Automounter master map
#
+auto_master		# Use directory service
/net			-hosts		-nobrowse,hidefromfinder,nosuid
/home			auto_home	-nobrowse,hidefromfinder
/Network/Servers	-fstab
/-			    -static
/-              auto_smb

Now create /etc/auto_smb and write something like this:

/Users/user1/Volumes/MyStuff -fstype=smbfs ://username:password@server/MyStuff
/Users/user1/Volumes/media   -fstype=smbfs ://username:password@server/media

/Users/user2/Volumes/SomeOtherStuff -fstype=smbfs ://username:password@server/SomeOtherStuff
/Users/user2/Volumes/media          -fstype=smbfs ://username:password@server/media

This file tells the automounter to treat the folders at the beginning of each line as mount points for the network shares at the end of each line. The file protocol I’m using is smbfs, which is the Windows file protocol. You could try afp here instead, which is the Apple File Protocol, but I ran into permission problems on files I was editing or creating on my server. Also, depending on your server, smbfs may be faster.

I’m mounting my network shares in each user’s home directory, in a ~/Volumes folder, but you can mount them anywhere else – the whole point here is to have one “mount sandbox” per user instead of the system-wide /Volumes.

Also, here I’m specifying network shares with a custom username and password. You may not need that if you’re using the same credentials on both the local machine and the server. Of course, replace server with your actual server name or IP.

Once that’s done, run sudo automount -vc to tell the automounter to refresh. If it doesn’t work, you may have to create the ~/Volumes directory, along with the mount points themselves. I also ran into cases where the automounter would still be a bit confused and I had to reboot to make things work, so you can try that as a last resort.
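For the first user from the example above, that boils down to:

mkdir -p ~/Volumes/MyStuff ~/Volumes/media
sudo automount -vc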

Accessing the network shares

That’s the totally lame part: the mount points you created are not displayed in Finder until they are actually mounted, which only happens on demand – i.e. the first time they are accessed. So, at first, your ~/Volumes directory will look empty – at least in the Finder. (If you go there in a Terminal, you will see the mount points, and you can cd into them, which triggers the mount and makes them appear in Finder! Yeah, it’s confusing, I know. Welcome to the Apple ecosystem.)

How do you access your network shares if you can’t see them in the Finder, then? Well, you go directly to them with “Go > Go to Folder…” and enter, say, ~/Volumes/media.

You only have to do this once in a while, because once a share is mounted, you’re unlikely to unmount it (you would actually need the command line for that). Also, once you’ve given that path to an application like iTunes or Lightroom, the application can trigger the mount itself by merely accessing the path: the next time you reboot and launch it, everything will magically work (although you may notice a pause of a second or two as the system mounts the network share).

Alternatives

If you want the network shares to be visible in Finder as soon as you login, you can either:

  • Make a login item (a Bash or AppleScript, for example) that triggers the mount by accessing the path (a minimal sketch follows this list).
  • Specify mount points in /etc/fstab instead of through the automounter.
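A minimal sketch of the login-item option, reusing the mount points from earlier – merely accessing each path is enough to trigger the mount:

#!/bin/sh
# Accessing each mount point makes autofs mount it, so it shows up in Finder.
ls ~/Volumes/media > /dev/null
ls ~/Volumes/MyStuff > /dev/null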

I prefer staying with the automounter myself, because it means I won’t get a timeout when I log into my laptop away from home. And since I rarely reboot, it’s very uncommon for me to have to manually re-mount my network shares.

Conclusion

This is what makes it possible for me to use my home server along with sharing my Macbook with other users. It was a lot more complicated than I anticipated, but I guess making Macbooks play nice with custom servers would go against Apple’s new cash cow, a.k.a “iCloud”.


When Windows “just works”: part 1 (the problem)

If you had asked me a year ago what the most awesome feature was that Windows has and MacOS doesn’t, I would probably have scratched my head for a bit, mentally sorting through all the obscure advanced things you can do with the Windows SDK and a few lines of code, or all the little things that make organizing files so much easier than with the horrible Mac Finder.

But if you ask me now, I’ll reply straight away: mapped network drives.

You would think there’s not much to it, but this has been my biggest problem as a user when I switched to a Macbook Pro as my main machine. In the first part of this 2-part post, we’ll look at the problem. Go to part 2 for the solution I’m using.

WTF is a mapped network drive?

In case you don’t know, here’s a little crash course in mapped drives. You can skip ahead if you’ve already used them.

Windows famously uses drive letters for each physical or logical drive hooked up to your computer. This means in most cases you would access your main system drive as C:, some secondary drive as D:, a USB thumbdrive as E:, etc. However, you can also be part of a network, and this means you would access a server’s shared directory as \\MyServer\sharename.

To make things simpler, you can map a server’s shared directory to a new drive letter like Z:. This is not an advanced feature. In fact, it’s always right there in the explorer’s toolbar when you open “My Computer”.

By the way, you can see here my typical setup at home, where I mount 3 shares from my file server: one for my personal data, one for the family data like pictures and home videos, and one for miscellaneous media like movies or music.

Setting up a mapped network drive takes about 2 clicks and a few keystrokes (a command-line equivalent follows the list):

  1. Click on “Map Network Drive”.
  2. Type your server’s name and share name.
  3. Optionally enter specific credentials if the share is protected.
  4. Done!
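For reference, the same mapping from a command prompt uses the built-in net use command – /persistent:yes is the equivalent of checking “Reconnect at logon”:

net use Z: \\MyServer\sharename /persistent:yes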

At this point, you have simple, convenient access to your server in a way that’s totally transparent to any application you may run. You can, for example, run a media manager like iTunes or MediaMonkey and tell it to look for music in Z:\Music. It will always work – at least as long as you checked the “Reconnect at logon” option, and you’re of course connected to your network.

Doesn’t MacOS have something similar?

It does. Kinda.

When you’re connected to your network, you will see the available servers in the “Shared” sidebar of the Mac Finder:

My server here shows up multiple times because it exposes several file protocols that MacOS understands (comparatively, out of the box, Windows only understands its own file protocol and not the Unix or Apple ones).

Once you click on a server, you can browse the shared directories available to you, and when you dive into one, MacOS will “mount” it on your file-system. You can tell it’s mounted because the Mac Finder shows an “eject” button next to each mounted share, and next to the server itself. Here I mounted 2 shares.

Just like on Windows, the server share is mounted in a way that should be transparent to applications. So it should all work out fine, right?

Well, no.

Under the hood

MacOS is based on a UNIX-like architecture, so there are no drive letters involved here – only mount points that look like directories on the local file-system.

Sure enough, if you open a Terminal and go look into /Volumes, you’ll see the mount points for the stuff I mounted in the previous screenshot:

My media folder on my file server is mounted as /Volumes/media, and if I point an application like iTunes to it, that’s the actual path it will use behind the scenes.

The problem of multiples

What happens if another user (say, my wife) logs in and also wants to mount the media folder to listen to some music? Or what happens if you want to mount another share called media from another server?

Well, MacOS, just like any other UNIX-based system out there, does something very stupid: it appends a number at the end of the mount point:

My wife’s media folder is, under the hood, mounted as /Volumes/media-1. Other attempts at mounting something with that name will result in /Volumes/media-2, /Volumes/media-3, and so on.

Of course, this all depends on who mounts things first. If I was the one who logged on to the Macbook later, I would have been assigned media-1 when my wife would have media.

Compare that to Windows’ mapped network drives, which are user-specific: there’s no problem as long as you assign the same drive letter to the same network share, which is a trivial thing to do.

Bad things happen

This is where things break down rapidly.

Did you originally, and unwittingly, tell iTunes your music was in /Volumes/media? Well, now your whole library is empty, because iTunes can’t access that directory – it either belongs to one of the other users, or it has been unmounted. Worse, it could actually point to a completely different share that you mounted from another server, just because it has the same name.

The same thing happens with your Lightroom pictures, your documents or presentations with embedded assets, and any other program that stores paths to some type of data you may have on a file server. All of a sudden, life is not worth living anymore.

The cynic in me says Apple doesn’t care about you if you’re sharing your laptop with someone else anyway – they’re not interested in cheap people – but this can actually happen even if you’re the only user: if there’s a problem with the server, or with the network connection, well, let’s just say MacOS will be very happy to give up on mounting your network share at its original mount point, and will instead mount it at a new one, again with the number suffix. So you could end up with /Volumes/media-1 even if you’re the only user logged in.

Keep reading to part 2 for my work-around.