The Stochastic Game

Ramblings of General Geekery

Tough Time for Honest Comics Readers

JManga, a digital manga service created less than 2 years ago by 39 of the biggest publishers in Japan, is shutting down in a couple of months. Most cloud-service-related fears became a reality when it became clear that no refunds or backups would be offered. Check out their “Urgent Message” for more details, but believe me when I say it can’t get any worse:

It is not possible to download manga from My Page. All digital manga content will no longer be viewable after May 30th 2013 at 11:59pm (US Pacific Time)

Everybody then wondered what would happen if ComiXology went down. And funnily enough, just the day before, ComiXology had experienced a massive blackout which left people unable to read any issues they didn’t have in their cache.

Rich Johnston from Bleeding Cool concludes:

This is the moment when the real winners are comic stores… and pirates.

As I said before, and as many others said before me: own your data. Cloud services are fine by me as long as there’s a way to easily back up my stuff on my file server, thank you very much.


The Death of Google Reader

After the infamous announcement that Google was shutting down Google Reader, there was a lot of debate around the use of online services, especially free ones, and whether we can trust a company to keep such services up indefinitely.

Of course, nothing can last “indefinitely”, and probably nothing will last until you die. You have to expect that Gmail, Facebook, iTunes, Amazon Kindle and any other service you’re currently using won’t last for more than, say, 20 years (and that’s being generous). You need to plan accordingly.

Marco Arment sums this up on his blog:

Always have one foot out the door. Be ready to go.

This isn’t cynical or pessimistic: it’s realistic, pragmatic, and responsible.

That’s what I’ve always tried to do. I choose programs, services and products that don’t take my data away from me. It makes it easier to switch to something else if I need/want to, and it future-proofs what I spend money on.

But that’s where it gets interesting, because Google Reader was pretty open to begin with. Google may have become this data hoarding and privacy raping monster over the years, but one thing they always had going for them was the Data Liberation initiative. With it, you effectively always had one foot out the door. You could, at any moment, download a list of all your subscriptions in an open, standardized format, along with a collection of all your stars, comments, and shares. You may be disrupted for a while because you would need to adapt to a new feed reader, but you could switch, just like you can switch text editors or operating systems.

What’s wrong with the Google Reader situation has nothing to do with your data, or with using a free service (although that is an important subject too). What happened is that Google Reader became a lot more than a free online feed reader. It became a single choke point for virtually every feed reader and news aggregator in the world. Google is of course to blame for making it collateral damage of its social-wannabe delusions, but we are equally to blame for letting all those programs like Flipboard, Pulse or NetNewsWire rely on a single service that was never really intended to be used that way. It’s understandable that it ended up this way, because relying on Google Reader meant easier adoption for new users and not having to worry about complex problems like data storage, metadata syncing, and interoperability… but that doesn’t make it the right decision either. We are to blame because we were constantly asking for Google Reader support. It became a feature you couldn’t ship without.

The death of Google Reader is not about losing a product we love – it’s about breaking an entire ecosystem of products and workflows.

Hopefully, we’ll recover and things will be better, but it does bring up another debate: the one about how we rely so much on other single choke points like Twitter and Facebook. Ideally, everything should be federated, like email, but I tried looking at distributed alternatives, and they’re just not working well enough. If the links you share and photos you post are of any value to you, I’d suggest you start looking at data harvesting solutions. I know I am.


The Problem with iOS

These past few months I’ve seen a fair number of articles about people who switched from iOS to Android. Most of those articles talk about the differences between the 2 operating systems, and how some of those differences proved significant enough for whoever was switching: multitasking, notifications, the so-called “open vs. closed” debate, etc. That’s fine, but those bullet-point-by-bullet-point comparisons seem to miss the higher-level view of what’s really going on: iOS is just not working the way it should anymore.

When iOS was released in 2007, it made a lot of things easier to do on a phone compared to the existing competitors. A lot of tasks just felt “right”, or at least “way better”. But it has basically stayed there since then. The only real additions, like the notification center or iCloud, are either incredibly badly designed, or only usable if you have an almost exclusively Apple-based household (which you shouldn’t). When I look at my iPad now, I feel like I’m looking at a computer that hasn’t been updated in years.

If I’m reading something in Safari, I should be able to quickly send it to Pocket, Buffer, Dropbox, or whatever other service I choose to base my workflow on. But on iOS, the only “sharing” services I have access to are those that Apple thinks I should use. For everything else, I need to copy the URL, switch to another app, and find where to paste it. Why?

If I added a few articles to read on Pocket late yesterday evening, I should have them ready on my phone or tablet the next morning so I can read them during the commute. But on iOS, I have to remember to open the Pocket app so it can sync. Why? (note that other apps like Instapaper try to use the Location Notification API but frankly, by the time I leave the house, and therefore lose any WiFi connectivity, it’s already too late).

If I just bought a bunch of PDFs from DriveThruRPG, e23, Arc Dream or Pelgrane, or some comicbooks from Panel Syndicate or Thrillbent, or whatever else I want from whoever I feel like giving money to, I should be able to just download those files onto my device and open them with any application I like. But on iOS, unless it’s coming from iTunes (directly or via in-app purchases, which also means 30% of my money doesn’t even go to the people I want to give it to), I have to jump through many hoops, going first through Dropbox or FileBrowser and then re-transferring every. single. file. to each application’s sandboxed storage. Why? Oh why?

You may not be surprised to learn that all of those things “just work” on Android. And those are just examples of the frustrations I’ve had with my iPad in the past few weeks. You don’t just go “BOOM!” anymore – quite the contrary: you actually have to work harder to make things happen.

Some people might be tempted to boil this down to a simple list of features, like “iOS needs a background service API, shared storage, and more extensibility points for 3rd party apps”. This sounds to me like going back to 2007 and saying “Windows Mobile needs to make the stylus optional, have a grid of icons on the home page, and remove copy/paste”. Really, it’s not just about a bullet point list of features. It’s about the whole philosophy of the system, and of the company behind it. And although Apple seems to occasionally bow to the pressure of the market, like when it released the 7.9″ iPad Mini, I don’t expect it to make changes this radical to the design of iOS.

Would I rather have an iPad Mini instead of my Nexus 7 to bring with me everywhere I go? Of course I would. The iPad Mini is thinner, lighter, and has a better screen. But as someone famous once said, “[design is] not just what it looks like and feels like. Design is how it works”. And for me, iOS is just not working well enough. And it never did, really – it’s just that until the competition caught up with the basics, only a minority of users noticed it.


PieCrust 1.0 RC

The past month has been pretty busy, between my next secret project, my day job, and of course fixing PieCrust bugs. But somehow among this chaos seems to be emerging a release candidate for PieCrust 1.0. And it’s only fitting that I announce this on Pi Day!


As always, for a complete list of changes, I’ll redirect you to the changelog. But for the highlights, please read on.

Big thanks go to the few people who contributed patches to the PieCrust code, and to the many who reported bugs and had the patience to help me fix them.

Breaking changes

First, the breaking changes. There are a few more than I’d like, but most of them should not be a problem for 99% of users:

  • Chef’s command line interface has changed: global options now need to be passed first, before the command name. So for example, if you want debug output when baking, you need to type chef --debug bake.
  • The pagination.posts iterator can’t be modified anymore (i.e. calls to skip, limit or filter will fail). You can use the blog.posts iterator instead to do anything custom (see the sketch after this list).
  • The xmldate Twig filter has been renamed to atomdate.
  • There was a bug with the monthly blog archives (accessed with blog.months), where they were incorrectly ordered chronologically. They are now ordered reverse-chronologically, like every other list of posts.
  • The baker/trailing_slash setting is now site/trailing_slash, since, when it’s enabled, PieCrust will generate links with a trailing slash in the preview server too, and not just during the bake. The old setting is still available, though.
  • The asset template variable has been renamed to assets. The old name is still available.
  • Specifying a link to a multi-tag listing page is now done with the array syntax: {{pctagurl(['tag1', 'tag2'])}}. The previous syntax quickly broke down as soon as somebody decided to have tags with slashes in their name 🙂
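
For example, a template that used to call limit on pagination.posts can switch to the blog.posts iterator instead. Here’s a minimal Twig sketch, assuming the skip/limit/filter calls keep the same names on blog.posts; the markup is purely illustrative:

{# pagination.posts.limit(3) would now fail; use blog.posts instead #}
{% for post in blog.posts.limit(3) %}
    <li><a href="{{ post.url }}">{{ post.title }}</a></li>
{% endfor %}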

All those changes should give you an error message that’s easy to understand, or have backwards compatibility in place with a warning telling you about the change. Look out for those.

Sass, Compass and YUI Compressor

Previously available as plugins, the Sass, Compass and YUI Compressor file processors are now part of the core. There were enough people mentioning those tools, especially Compass, that it made sense to include them by default.

The Sass processor is very similar to the one previously available in the plugin. In the site configuration, you can specify include paths with sass/load_paths, output style with sass/style, or any custom option to pass to the Sass tool with sass/options.

Compass support, however, has changed quite a bit, and should now be a lot better:

  • You enable it by setting compass/use_compass to true. This will prevent the default Sass processor from running on your .scss files.
  • If .sass or .scss files are found in the website, the compass tool will be run at the end of the bake. By default it will use any config.rb found at the root of the site. You can otherwise specify where your Compass config is with compass/config_path, or ask PieCrust to auto-generate it for you by setting compass/auto_config to true.
  • It may be a good idea to add your config file to the baker/skip_patterns list, so that it’s not copied to the output directory.

To enable the YUI Compressor to run on anything that outputs CSS, specify the path to the .jar file with yui/compressor/jar.
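
Putting those settings together, here’s a rough sketch of what the relevant part of the site configuration could look like. The paths and values are only examples, and the exact shape of some values (like whether load_paths is a list) is an assumption:

sass:
    load_paths:
        - _sass
    style: compressed
compass:
    use_compass: true
    config_path: config.rb
yui:
    compressor:
        jar: lib/yuicompressor.jar
baker:
    skip_patterns:
        - config.rb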

Linking feature now official

For a while, there was a link template variable that let you access other pages in the content tree. It was, however, never really official, since I was still iterating on the design.

It’s now official, and available through the siblings template variable. It will return the pages and directories next to the current page.

To return the whole family tree starting from the current page, you can use family. It’s like a subset of site.pages.
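
As a rough illustration only (assuming siblings can be iterated directly in a Twig template and that each entry exposes the usual url and title variables, which may not match the final design), a navigation snippet could look like this:

<ul>
{% for sibling in siblings %}
    <li><a href="{{ sibling.url }}">{{ sibling.title }}</a></li>
{% endfor %}
</ul>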

Auto-format extensions

Another popular request is the ability to use different file extensions for pages and posts, like .md for Markdown content or .textile for Textile content.

This is now possible with site/auto_formats. This is a map that associates a file extension with a format name:

site:
    auto_formats:
        md: markdown
        mdown: markdown

Here I’m mapping *.md and *.mdown to the Markdown format. Files found with those extensions will be treated as if they were .html files, but will also have their format set to markdown.

Feed preparation

If you write a blog, you most probably want to have an RSS feed. You can have one prepared for you with: chef prepare feed myfeed.xml. It will create a new page that has most of what you want by default. You can then go and tweak it if you want, of course.

Miscellaneous

A few other important changes:

  • All libraries (including Twig, Markdown and Textile) have been upgraded to their latest versions.
  • It is now possible to specify posts_filters on a tag or category page (_tag.html or _category.html); see the sketch below.
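
For example, a _tag.html page could declare such a filter in its configuration header. This is only a sketch: treat the has_tags filter name as an assumption about the filtering syntax rather than something to copy blindly:

---
posts_filters:
    has_tags: piecrust
---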

PieCrust on Heroku

When I first decided to work on PieCrust, I settled on PHP as the language – even though it mostly sucks – in an attempt to make it broadly available. Anybody who runs a blog on WordPress should be able to switch and enjoy the perks of plain text data without needing to install and learn a whole new environment.

That doesn’t mean PieCrust can’t also be used in the nerdiest ways possible. A while ago we looked at how cool it is to update your website with Git or Mercurial, and today we’ll look at how you can host it on Heroku, which incidentally also supports Git-based deployment.


If you already know how Heroku works, then the only thing you need is to make your app use the custom PieCrust buildpack. Skip to the end for a few details about it.

For the rest, here’s a detailed guide for setting up your PieCrust blog on Heroku, after the break.

1. Sign up and setup

This is pretty obvious but it’s still a step you’ll have to go through: sign up for a Heroku account and install their tools. Follow the first step to log in via the command line, but don’t create any app just yet.

2. Create your PieCrust website

For the sake of this tutorial, let’s start with a fresh new site. You can of course use an existing one; the steps would be very similar.

Let’s create one called mypiecrustblog:

> chef init mypiecrustblog
PieCrust website created in: mypiecrustblog/

Run 'chef serve' on this directory to preview it.
Run 'chef bake' on this directory to generate the static files.

Let’s also add a post, just to be fancy:

> chef prepare post hello-heroku
Creating new post: _content/posts/2012-12-03_hello-heroku.html

Last, turn the site into a Git repository, make Git ignore the _cache directory, and commit all your files:

> git init .
Initialized empty Git repository in /your/path/to/mypiecrustblog/.git/
> echo _cache > .gitignore
> git add .
> git commit -a -m "Initial commit."

By the way, you can quickly check what the site looks like locally with chef serve. We should be able to see the exact same thing online in a few minutes when it’s running on Heroku.

3. Create your Heroku app

Now we’ll turn our site into a Heroku app. The only difference with the documentation on the Heroku website for this is that we’ll add an extra command line parameter to tell it that it’s a PieCrust application:

> heroku create mypiecrustblog --buildpack https://github.com/ludovicchabant/heroku-buildpack-piecrust.git
Creating mypiecrustblog... done, stack is cedar
BUILDPACK_URL=https://github.com/ludovicchabant/heroku-buildpack-piecrust.git
http://mypiecrustblog.herokuapp.com/ | git@heroku.com:mypiecrustblog.git
Git remote heroku added

What’s happening here is that, in theory, Heroku doesn’t know about any programming language or development environment – instead, it relies on “buildpacks” to tell it what to do to set up and run each application. It has a bunch of default buildpacks for the most common technologies, but it wouldn’t know what to do with a PieCrust website so we need to provide our own buildpack, with that --buildpack parameter.

If you already created your app previously, you can also make it a PieCrust application by editing your app’s configuration like this:

heroku config:add BUILDPACK_URL=https://github.com/ludovicchabant/heroku-buildpack-piecrust

We can now push our website’s contents to Heroku:

> git push heroku master
Counting objects: 3, done.
Writing objects: 100% (1/1), 185 bytes, done.
Total 1 (delta 0), reused 0 (delta 0)

-----> Heroku receiving push
-----> Fetching custom git buildpack... done
-----> PieCrust app detected
-----> Bundling Apache version 2.2.22
-----> Bundling PHP version 5.3.10
-----> Bundling PieCrust version default
-----> Reading PieCrust Heroku settings
-----> Baking the site
[   171.7 ms] cleaned cache (reason: not valid anymore)
[    46.4 ms] 2012/12/03/hello-heroku
[    21.3 ms] [main page]
[     2.2 ms] css/simple.css
-------------------------
[   247.3 ms] done baking
-----> Discovering process types
       Procfile declares types    -> (none)
       Default types for PieCrust -> web
-----> Compiled slug size: 9.5MB
-----> Launching... done, v7
       http://mypiecrustblog.herokuapp.com deployed to Heroku

To git@heroku.com:mypiecrustblog.git
   1180f39..e70c271  master -> master

At this point, you should be able to browse your website on Heroku (http://mypiecrustblog.herokuapp.com in our case here).

You now just need to keep adding content, and git push to make it available online.

Appendix: The PieCrust buildpack

The PieCrust buildpack we’re using in this tutorial will, by default, bake your website and put all the generated static files in the www folder for the world to enjoy.

If, however, you set the heroku/build_type site configuration setting to dynamic, it will copy the PieCrust binary (a .phar archive) to your app’s folder and create a small bootstrap PHP script that will run PieCrust on each request. This makes deployments very fast, as you don’t have to wait for the website to re-bake, but it’s highly recommended that you use a good cache or reverse proxy for anything other than test websites.
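
Concretely, that’s a regular entry in the site configuration. Here’s a minimal sketch of the relevant bit of _content/config.yml, with the slash-separated setting name mapped to nested YAML keys as elsewhere in PieCrust:

heroku:
    build_type: dynamic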

Note that the version of PieCrust that’s used by the buildpack is, by default, the latest one from the development branch (default in Mercurial, master in Git). You can change that with the PIECRUST_VERSION environment variable. For example, to use the stable branch instead, you can do:

> heroku config:add PIECRUST_VERSION=stable

For more information about the buildpack, you can simply go check the source code over on Github.


Themes in PieCrust

I just pushed a lot of changes to the dev branch of PieCrust, including the new support for themes. The point of themes is to make it easy to change your website’s appearance by further separating content and look.

Here’s an early look at how themes work, so that anybody can play with it and provide feedback. Not everything is in place yet, so now’s the best time to affect the design.

The theme folder

When a website is using a theme, that theme will be placed in the _content/theme folder.

The theme itself is really just another PieCrust website: it has its own _content folder with pages and templates and everything you expect. The only differences are:

  • The configuration file is named _content/theme_config.yml instead of _content/config.yml. This is so chef isn’t confused about what the root of the current website is when you go into the theme folder.
  • A theme should have a theme_info.yml file at the root of the theme. It’s a YAML file with, at minimum, a name and a description (a minimal sketch follows this list).
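
Here’s a minimal sketch of such a theme_info.yml; the values are just placeholders:

name: Simple
description: A minimal example theme for PieCrust.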

The theme behaves as follows:

  • Pages defined in the theme (in _content/pages, like any other PieCrust website) are added to the base website, unless a page with the same URL has been defined there. It makes it possible for themes to define pages like an “About” page or some blog archives. They effectively complement the website on which the theme is applied.
  • The theme’s templates directories (site/templates_dirs setting) are added before the default _content/templates directory, but after any other custom directory defined by the website. It makes it possible for themes to override the default templates, and for users to override a template defined in a theme.

The themes command

A new themes command is available in chef. It looks a lot like the plugins command in the sense that it offers the same 3 sub-commands: info, find and install. Right now, however, there are no themes in the default repository, so chef themes find won’t return anything, which means there’s no theme to install.

You could set up a local repository by setting site/themes_sources to /path/to/my/themes, where that path contains one or more themes in sub-directories. You can then install one of your local themes by running chef themes install <name>, which basically just copies the theme inside _content/theme.
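
In other words, something along these lines in the website’s configuration, where each theme sits in its own sub-directory (the path and layout below are only an example):

site:
    themes_sources: /path/to/my/themes

Here, /path/to/my/themes could contain, for example:

    simple/
        theme_info.yml
        _content/
            pages/
            templates/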

For faster development, however, I would recommend just sym-linking your theme’s root directory to _content/theme.

Quickstart

To summarize, here’s how you can write a theme for PieCrust at the moment:

  • Create a PieCrust site as usual.
  • Rename _content/config.yml to _content/theme_config.yml.
  • Add a theme_info.yml file at the root. This is a YAML file, just like
    config.yml.
    • Give it a name.
    • Give it a description.
  • Write pages, templates, CSS stylesheets and so on.
    • It’s a good idea to implement standard pages like _index.html,
      _tag.html and _category.html.
    • It’s also good to implement standard templates like default.html and
      post.html.

When you’re happy, symlink – or copy – the theme’s directory to a website’s _content/theme directory. Play around by adding new pages and posts to that site.

As always, report issues on the BitBucket or Github issue trackers.


Todon’t

Jeff Atwood posted another one of his controversial, opinionated articles on his blog, this time a long rant about the failure of To-Do lists.

As a former Remember The Milk user and fan, I can totally relate. I stopped using To-Do lists altogether a couple of years ago. I just didn’t need them anymore – I knew what I needed to do most of the time:

If you can’t wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day – then you should seriously work on fixing that.

But the truth is: I’m still using some form of list, especially since Trello came around. It took me a while to realize the difference between the Getting-Things-Done-ish productivity I tried to achieve in the past, and the more zen-like – and effective – process I have now.

Patrick Rhone eventually wrote it for me:

[…] increasingly, my to do list is full of the things I park there that otherwise get in the way of what I’m actively focused on.

My To-Do lists (hosted on Trello) are filled with stuff I don’t want to do right now, but need to remember for later. I know what I need to be working on right now, but I may forget about stuff I may want to do later. It’s more like a notebook than a To-Do list, really, but the To-Do list format makes it easier to cross things out if they become irrelevant or once they’re done.


The post-PC era

You can see this kind of headline all over the web these days, especially from Apple fanboy tech bloggers: the PC is dead, all hail tablets and smartphones. The argument is also made for video game consoles, which are supposedly on the way out, to be replaced by, guess what, tablets and smartphones. Even Jeff Atwood is getting on the bandwagon.

I don’t disagree with the facts here: most indicators we have on the market right now show that, indeed, desktop and laptop computers have declining sales while mobile products show ever-accelerating growth.


Some tech bloggers, however, are a bit too quick to equate opposing trends with replacement – in reality, people still own PCs and Macs, but complement them with mobile devices. As far as I know, there’s no evidence that anybody is actually getting rid of their laptops and desktop computers after buying an iPad… yet.

Main device of choice

Interestingly enough, Jeff Atwood recently posted an article about the ASUS Zenbook Prime (which, incidentally, is the laptop I bought my wife) as being “the last laptop he may ever buy”. He argued that, more often than not, he would choose a tablet or smartphone when deciding what device to use or take with him, leaving the laptop behind (which sounds dubious from a guy who must spend most of his time in Visual Studio, but whatever).

This, indeed, points at actual replacement. But look at the way he supposedly makes his decision:

  • Want 10 hours of real world battery life?
  • Want to start doing stuff immediately?
  • Want the smallest most portable device you can get away with?
  • Want to be always connected to the Internet?
  • Want easy access?

Those are never the questions I have in mind when deciding what to put in my backpack, or what to take with me around the house. Those are just specs. Some are laughable (seriously, you can’t wait 2 seconds for the laptop to resume from sleep?), and some are kind of obvious (well, yes, I won’t take anything too bulky or impractical on the bus, but that’s as applicable to a laptop as it is to a large book, my drawing equipment, or my bass guitar, which I would otherwise very much like to take with me everywhere), but they’re all beside the point.

Instead, there’s only one question I ask myself: what do I want to do? And then just take what lets me do exactly that, given the environment in which I intend to do it.

For me, this is almost always the laptop – and I’m not the only one. Some of it is my own personal tastes and workflows (the need for a real keyboard, a big enough screen, a connection to my file server), some of it is because of the software I need to run (Lightroom, Visual Studio or any other development environment, Mercurial or Git, Pro Tools), and some of it is the necessities of the task at hand (using a Wacom tablet or Cintiq, using a high-speed audio interface with several music instruments and microphones plugged-in). All of this basically prevents me from using a tablet even if I wanted (which I don’t really).

What goes in the bag?

The only task for which I consistently pick the tablet is consuming content, like catching up with my “to read/to watch” list, going through social networks and feeds, and enjoying comicbooks.

It’s also the only device I take with me to bed or when leaving the house. Is that a shortcoming of laptops then? Absolutely not. I never planned to write code or tag photos or record music in a crowded subway before, and that hasn’t changed. What I would plan to do is read something, and that’s why I always carried a couple books with me all the time.

Now I just carry one small tablet. It’s not because it’s small or because it has better battery life or because it turns on right away. I have it because it’s the best digital replacement for books (I don’t carry a Kindle or other e-book reader because I tend to read a lot of comics or other illustrated books like RPG books).

For me, the tablet is a definitive improvement (fewer physical books cluttering the house, a lighter and smaller backpack, a more pleasant reading experience), but it has barely replaced anything I do on the laptop. The only thing it did replace is Google Reader/Facebook/Twitter in a browser.

I also hear a lot of people saying a phone or tablet is better for quickly checking emails, or looking up something on the web, or whatever. Although I did use my mobile devices for that at the beginning, I found that it was not that much of an improvement. This is because:

  • I stopped compulsively checking my emails and notifications throughout the day. Now I’m only doing that a few times a day, which means that when I do, I will probably have several things to delete/archive/reply to, and that’s just quicker to do on the laptop (you do use keyboard shortcuts, don’t you?).
  • I have a lot of productivity shortcuts set up on my system. I can look something up on a map or on Wikipedia or IMDB or whatever in less time on the laptop than on my iPad. I know, I actually tested that.

One of the exceptions is looking up recipes. My wife got me a fridge stand for the iPad for when I’m cooking. It’s also easier to clean grease marks on a screen than on a keyboard. But, again, that didn’t replace using a laptop: before I would have a recipe book open on the counter.

The post-PC era

Now, we can try to imagine how this is going to change in the next few years. I wouldn’t be surprised if, at some point, your phone could be paired with an external display, on top of a mouse and keyboard, and get into a mode where it’s running a full-fledged (or legacy) OS that can run more complex applications. In the future, your phone could very well be your wallet, your identity, and your one and only computer, with some storage and possibly computing power extending to the cloud.

Is that when the PC will effectively be dead? No. That’s just when your phone will have evolved into your PC. That’s what Jeff Atwood really hints at:

Our phones are now so damn fast and capable as personal computers that I’m starting to wonder why I don’t just use the thing I always have in my pocket as my “laptop”, plugging it into a keyboard and display as necessary.

The “post-PC” era is really about moving past some existing form factors and into new ones. But if you could plug your phone into a keyboard and display to run Visual Studio or Lightroom or whatever, what do you really have? Well: a very small desktop computer.

We went from cabinet-sized PC to desktop PC to smaller PCs to, well, phone-sized PCs (yes, they already exist), but only with smartphones and tablets did the industry embrace new UI designs and input methods. And the danger, here (apart from the attached problems of vertically integrated services and other monopolies), is to think it’s a panacea.

I’m looking forward to seeing how it will continue changing the market, but meanwhile I’ll keep using my laptop. You know, the one that keeps getting lighter and thinner while gaining multi-touch trackpad capabilities. Thanks a lot for that, by the way.


This is why people buy Macs

A few months ago I set out to get a new laptop for my wife. She only had one requirement, after having shared a Macbook Pro with me for the past couple of years: that it run Windows (cue OS flamewar).

I quickly decided I wanted to give her something slick and light, and looked at the new line of ultrabooks. I then narrowed the choices down to the Samsung Series 9 and the ASUS Zenbook by reading reviews online… but that was just the easy part.


Much has been said already about the shopping and out-of-the-box experience of PCs, compared to that of Macs, but I think we should keep beating that dead horse until it’s underground. So keep reading for much deceased equidae action.

Getting your hands on the stuff

The first step is to actually get your hands on the models you’re looking for. But while it’s easy with a Mac – just walk into any Apple store or Best Buy or whatever – it’s not that easy with a PC.

Different stores carry different models, have different deals with manufacturers, and have different supply chains. Even within one store franchise, the availability of models varies from store to store, and that’s not counting which ones are actually available on display: for some reason, a lot of stores won’t let you touch expensive laptops like the two I wanted to try. Paradoxically, the Apple mini-store located in the next aisle would always happily provide multiple copies of each of its devices for people to play with.

In my case, London Drugs (here in Vancouver, Canada) was one of the only stores to carry the 2 models I was looking for – the whole point being to compare them side by side. But it still took me several visits to several stores all around town to find one with both units actually on display.

During my search, I got some interesting tidbits of information from sales people:

  • High-end laptops are already well taken care of by Apple, so shelf space is reserved for feature-, mainstream-, or budget-oriented models from PC brands. Windows power-users therefore get forgotten in the process.
  • Some stores have issues securing those new ultrabooks, so they end up not having them on display at all. Unlike Apple, which provides appropriate locks for the Macbook Air, other manufacturers leave it up to the stores, who often use generic, bulky locks. But those locks often don’t work with ultra-thin cases… and when they do, they’re almost as heavy as the laptop itself, which completely ruins the experience of trying it out.

I eventually managed to test the models I was considering, and quickly decided the ASUS Zenbook would work better. Good news: the Zenbook Prime was just about to be released, with a much nicer body design and a supposedly better trackpad… so after a couple weeks waiting for that one to show up in store, I came back home one day with my precious gift.


Stickers

The Zenbook is a beautiful machine, but like many PCs it comes with a bunch of ugly stickers. This is always totally baffling to me: what’s the point? Are people going to look at your laptop at Starbucks and go “I see your laptop has an Intel CPU inside! Mine too, this is so awesome! Much better than AMD, don’t you think?”.

No, all they do is waste 5 minutes of your time as you remove them and then clean up the sticky bits that they leave behind.


The silver lining in this case was that all 3 stickers that come on the Zenbook are mostly in grey-ish tones similar to the laptop’s body – no flashy ugly colors here. So it’s not completely horrible. Just annoying.

Bloatware

Another expected downside of buying a PC: all the bloatware that comes pre-loaded with it. Everything, from the moderately useful to the totally useless, can be found on the Zenbook once you boot it up. I counted around 20 “utilities” installed.

Of course, I opted for the usual practice of wiping everything and installing a fresh Windows on it. I had to fight the BIOS for a long time, trying to get the default GUID Partition Table to work with my bootable USB key. But the UEFI boot mode wouldn’t work for some reason, so eventually I gave up and re-allocated the whole hard drive as a more classic Master Boot Record partition. It’s kind of sad, this being 2012 and all, but as I understand it, I’m not losing much when it comes to a single 128GB partition.

Of course, once Windows is installed, the fun has only begun: you still have to go to ASUS’ website to get the drivers for all kinds of components, along with downloading a whole bunch of updates from Microsoft.

Conclusion

The Zenbook Prime, once you’ve been through all those hoops, is a wonderful little machine. It looks gorgeous, is very portable, and the performance is pretty good. The accessories are also nice, especially the leather-ish case for both the laptop and some of the cables. The power cable is the only poorly designed component in the whole package in my opinion, but overall, I recommend this laptop highly.


But going through all these steps was pretty tedious. In comparison, the shopping and out-of-the-box experience of a Mac goes along these lines:

  • Go into any computer store.
  • Try any Macbook.
  • Give lots of money.
  • Boot it up.

I guess Microsoft Stores and Samsung Stores and the like are trying to address the issue, but it’s a long way away.


The state of Diaspora

In the main article about the road to Diaspora, we looked at setting up our own pod to interact with the Diaspora federated community. Now we’re going to look at how that actually works. Or not. Because since I set up my pod a few weeks ago, I’ve had nothing but problems.

The river of problems

To give you an idea of how bad Diaspora is, even after a couple years of development, look no further than the bug tracker on their Github project page. After a week of using my pod, I had already posted half a dozen issues – and that’s for just one user on a lonely pod. I wasn’t following a lot of people either. Some of those issues have been closed, some are still open. But the point is: this is not good for a project that’s been in beta testing for so long.

Then, there are the performance issues. Ruby on Rails has never had a very good reputation in this department, but I thought it was mostly a “haters gonna hate” kind of reputation, or some remnant of the framework’s early days, when it was not necessarily very optimized. But after running Diaspora, I’m revisiting my opinion. On an Ubuntu VM with 512MB of RAM, the thin server that runs Diaspora is the only process that ever got killed for running out of memory – neither WordPress nor MediaWiki nor anything else ran into this situation. And it happens on a regular basis. We’re talking a couple of times a day on average. And don’t even think of running a Rails console at the same time: it may work for a while, but you’d better get done quickly because one of the two processes will die soon enough.

And then there’s the whole “federation” aspect, which is the whole point of Diaspora. Spoiler warning: it doesn’t work. I don’t know if it’s related to the performance problems above (the process could be dropping important bits of information when it gets killed) but I always seem to be missing posts and updates from anybody else that’s not on my pod. Doing some testing between my pod and a few other well-known pods like joindiaspora.com or diasp.org, I get completely random results: sometimes a comment or “like” gets propagated in a few seconds, sometimes it takes hours, and sometimes it just doesn’t show up at all on one end or another.

I could go on and on about all kinds of little problems, from the completely stupid “PersonName started sharing with you!” email that doesn’t make any sense (because it means they’re following you, which means they won’t share anything with you unless you follow them back) to some weird design decisions around hashtags (try figuring out how to follow hashtags from other pods) to some obvious problems that never seem to get fixed (like useless Diaspora links to your Twitter cross-posts, or the inability to cross-post public posts to Facebook). I guess that’s what the Github bug tracker is for but, again, this really doesn’t look like a project that’s more than 2 years old.

Diaspora’s future

The original Diaspora founders recently left the project, saying they’re “giving it back to the community” while they “take the back seat” to work on Makr.io, a very stupid website concept. Some people say that’s the death of Diaspora but, judging from the state of it, it may actually be what saves it. The project was obviously badly managed and designed from the start, so maybe, just maybe, having a community of better programmers take over the codebase would make it viable.

There’s also the very, very slim possibility of someone re-implementing Diaspora using a different framework than Ruby on Rails. Diaspora is mostly based on open standards – although the keyword here is “mostly” – so it would be possible to rewrite something compatible from scratch.

There’s also the very, very slim possibility of someone writing something else that works better. Mike Macgirvin, the creator of Friendica, recently started working on “Project Red”, which is “Friendica taken to the next level”. His announcement didn’t hesitate to make fun of Diaspora:

Friendica WORKS today (unlike similar projects which are still struggling at basic communications after two years, and after squandering huge amounts of money).

Seeing how Friendica works and is easily installed – even though it’s ugly and useless to me – Project Red could be a good thing if it’s based on asymmetric relationships and Mike can enlist the help of a moderately talented web designer.

In the meantime, I don’t think we’ll get people away from Twitter and Facebook any time soon.