The Stochastic Game

Ramblings of General Geekery

Microsoft password fail

Almost 4 years ago, I wrote a short article on dumb websites that enforce a maximum password length.

Now, in 2012, there are still websites with such stupid policies. One of the most famous is none other than Microsoft’s Live Account service, which serves as the authentication hub for all things Microsoft. Basically, your Live ID, or whatever it’s called, can’t have a password longer than 16 characters.

Microsoft is, rightly, getting a lot of criticism about that, because the recently released Windows 8 lets you link your Windows user to a Live ID, for use with the whole Windows App Store thing and more. In most cases, this means that your Windows password can’t be longer than 16 characters; in other cases, it means you get all kinds of weird account-related bugs and may need to enter a truncated password.

Oh, Microsoft.


When Windows “just works”: part 2 (the workaround)

Back in the first part of this 2-part post we looked in some detail at how MacOS mounts network shares, and how badly designed this feature is compared to its Windows counterpart.

We’ll now look at the solution I’m using to fix the problem, which is to mount network shares in a consistent way for a multi-user machine.

The workaround

Thanks to the power of UNIX, you can get around the problem by spending hours reading boring documentation, searching useless forums, editing obscure configuration files and generally speaking wasting your time for something Windows gets right in 2 clicks.

If you do the research, you will find lots and lots of solutions – some clever and some completely stupid. Not all of them worked for me, so here’s the solution I came up with. It uses the autofs feature supported by most recent versions of MacOS, and has some pros and cons. “Your mileage may vary”, as they say.

Creating custom mount points

Start by editing /etc/auto_master. It should look like the following, but without that last line. Add it using your favorite text editor (which you may have to run as root):

#
# Automounter master map
#
+auto_master		# Use directory service
/net			-hosts		-nobrowse,hidefromfinder,nosuid
/home			auto_home	-nobrowse,hidefromfinder
/Network/Servers	-fstab
/-			    -static
/-              auto_smb

Now create /etc/auto_smb and write something like this:

/Users/user1/Volumes/MyStuff -fstype=smbfs ://username:password@server/MyStuff
/Users/user1/Volumes/media   -fstype=smbfs ://username:password@server/media

/Users/user2/Volumes/SomeOtherStuff -fstype=smbfs ://username:password@server/SomeOtherStuff
/Users/user2/Volumes/media          -fstype=smbfs ://username:password@server/media

This file tells the automounter to treat the folders specified at the beginning of each line as mount points for the network shares specified at the end of each line. The file protocol I’m using is smbfs, which is the Windows file protocol. You could try afp here instead, which is the Apple Filing Protocol, but I ran into permission problems on files I was editing or creating on my server. Also, depending on your server, smbfs may be faster.

I’m mounting my network shares in each user’s home directory, in a ~/Volumes folder, but you can mount them anywhere else – the whole point here is to have one “mount sandbox” per user instead of the system-wide /Volumes.

Also, here I’m specifying network shares with a custom username and password. You may not need that if you’re using the same credentials on both the local machine and the server. Of course, replace server with your actual server name or IP.

Once that’s done, run sudo automount -vc to tell the automounter to refresh its maps. If it doesn’t work, you may have to create the ~/Volumes directory, along with the mount points themselves, by hand. I also ran into cases where the automounter would still be a bit confused, and I had to reboot to make things work, so you can try that as a last resort.
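Concretely, with the configuration above, that boils down to something like this:

# Create the per-user mount points by hand if needed
mkdir -p ~/Volumes/MyStuff ~/Volumes/media

# Flush the automounter's cache and reload its maps
sudo automount -vc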

Accessing the network shares

That’s the totally lame part: the mount points you created are not displayed in Finder until they are actually mounted, and mounting only happens when needed, i.e. the first time they are accessed. So, at first, your ~/Volumes directory will look empty – at least in Finder. If you go there using a Terminal, you will see the mount points, and you can cd into them, which triggers the mount and makes them appear in Finder. Yeah, it’s confusing, I know. Welcome to the Apple ecosystem.
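For example, from a Terminal (using the mount points from the earlier auto_smb example):

$ ls ~/Volumes
MyStuff    media
$ cd ~/Volumes/media    # this first access triggers the mount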

How do you access your network shares if you can’t see them in the Finder, then? Well, you go directly to them with “Go > Go to Folder…” and enter, say, ~/Volumes/media:

You only have to do this once in a while, because once they are mounted, you’re unlikely to unmount them (you would actually need the command line for that). Also, once you’ve given that path to an application like iTunes or Lightroom, the application can trigger the mount itself by merely accessing the path: the next time you reboot and launch it, everything will magically work (although you may notice a pause of a second or two as the system mounts the network share).

Alternatives

If you want the network shares to be visible in Finder as soon as you login, you can either:

  • Make a login item (a Bash script or an AppleScript, for example) that will trigger the mount by accessing the path somehow – see the sketch below.
  • Specify mount points in /etc/fstab instead of through the automounter.
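For the first option, the script can be as dumb as touching each path, since any access triggers the mount. A minimal sketch (the paths are the ones from the earlier auto_smb example):

#!/bin/sh
# Accessing each path is enough to make autofs mount the share,
# which in turn makes it show up in Finder.
ls ~/Volumes/MyStuff > /dev/null
ls ~/Volumes/media > /dev/null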

I prefer sticking with the automounter myself, because it means I won’t get a timeout when I log into my laptop away from home. Since I rarely reboot, it’s very uncommon for me to have to manually re-mount my network shares.

Conclusion

This is what makes it possible for me to use my home server while sharing my MacBook with other users. It was a lot more complicated than I anticipated, but I guess making MacBooks play nice with custom servers would go against Apple’s new cash cow, a.k.a. “iCloud”.


When Windows “just works”: part 1 (the problem)

If you had asked me a year ago for the most awesome feature that Windows has and MacOS doesn’t, I would probably have scratched my head for a bit, mentally sorting through all the obscure advanced things you can do with the Windows SDK and a few lines of code, or all the little things that make organizing files so much easier than with the horrible Mac Finder.

But if you ask me now, I’ll reply straight away: mapped network drives.

You would think there’s not much to it, but this has been my biggest problem as a user since I switched to a MacBook Pro as my main machine. In the first part of this 2-part post, we’ll look at the problem. Go to part 2 for the solution I’m using.

WTF is a mapped network drive?

In case you don’t know, here’s a little crash course in mapped drives. You can skip ahead if you’ve already used them.

Windows famously uses drive letters for each physical or logical drive hooked up to your computer. This means in most cases you would access your main system drive as C:, some secondary drive as D:, a USB thumbdrive as E:, etc. However, you can also be part of a network, and this means you would access a server’s shared directory as \\MyServer\sharename.

To make things simpler, you can map a server’s shared directory to a new drive letter like Z:. This is not an advanced feature. In fact, it’s always right there in the explorer’s toolbar when you open “My Computer”.

By the way, you can see here my typical setup at home, where I mount 3 shares from my file server: one for my personal data, one for the family data like pictures and home videos, and one for miscellaneous media like movies or music.

Setting up a mapped network drive takes about 2 clicks and a few keystrokes:

  1. Click on “Map Network Drive”.
  2. Type your server’s name and share name.
  3. Optionally enter specific credentials if the share is protected.
  4. Done!
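If you prefer the keyboard, the standard net use command does the same thing (the server and share names here are placeholders):

> net use Z: \\MyServer\media /persistent:yes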

At this point, you have simple and convenient access to your server in a way that’s totally transparent to any application you may run. You can, for example, run a media manager like iTunes or MediaMonkey and tell it to look for music in Z:\Music. It will always work – at least as long as you checked the “Reconnect at logon” option, and you’re of course connected to your network.

Doesn’t MacOS have something similar?

It does. Kinda.

When you’re connected to your network, you will see the available servers in the “Shared” sidebar of the Mac Finder:

My server here shows up multiple times because it exposes several file protocols that MacOS understands (by comparison, out of the box, Windows only understands its own file protocol, and not the Unix or Apple ones).

Once you click on a server, you can browse the shared directories that are available to you, and when you dive into one, it gets “mounted” on your file-system. You can tell it’s mounted because the Mac Finder shows an “eject” button next to each mounted share, and next to the server itself. Here I mounted 2 shares:

Just like on Windows, the server share is mounted in a way that should be transparent to applications. So it should all work out fine, right?

Well, no.

Under the hood

MacOS is based on a UNIX-like architecture, so there are no drive letters involved here – only mount points that look like directories on the local file-system.

Sure enough, if you open a Terminal and go look into /Volumes, you’ll see the mount points for the stuff I mounted in the previous screenshot:

My media folder on my file server is mounted as /Volumes/media, and if I point an application like iTunes to it, that’s the actual path it will use behind the scenes.

The problem of multiples

What happens if another user (say, my wife) logs in and also wants to mount the media folder to listen to some music? Or what happens if you want to mount another share called media from another server?

Well, MacOS, just like any other UNIX-based system out there, does something very stupid: it appends a number at the end of the mount point:

My wife’s media folder is, under the hood, mounted as /Volumes/media-1. Other attempts at mounting something with that name will result in /Volumes/media-2, /Volumes/media-3, and so on.

Of course, this all depends on who mounts things first. If I had been the one who logged on to the MacBook later, I would have been assigned media-1 while my wife got media.

Compare that to Windows’ mapped network drives, which are user-specific: there’s no problem as long as you assign the same drive letter to the same network share, which is a trivial thing to do.

Bad things happen

This is where things break down rapidly.

Did you originally, and unwittingly, tell iTunes your music was in /Volumes/media? Well, now your whole library is empty, because iTunes can’t access that directory – it either belongs to one of the other users, or it has been unmounted. Worse, it could actually point to a completely different share that you mounted from another server, just because it has the same name.

The same thing happens with your Lightroom pictures, your documents or presentations with embedded assets, and any other program that stores paths to some type of data you may have on a file server. All of a sudden, life is not worth living anymore.

The cynic in me says Apple doesn’t care about you if you’re sharing your laptop with someone else anyway – they’re not interested in cheap people – but this can actually happen even if you’re the only user: if there’s a problem with the server, or with the network connection, well, let’s just say MacOS will be very happy to give up mounting your network share at its original mount point, and will instead use a new mount point, again with the number suffix. So you could end up with /Volumes/media-1 even if you’re the only user logged in.

Read on to part 2 for my workaround.


PieCrust 0.8.0

The 0.8.0 (and even 0.8.1!) version of PieCrust has been tagged in the stable branch.

mum's lemon meringue pie

As usual:

  • I still need to write some documentation on the new and/or changed features.
  • I’m really not good at keeping a single release focused around a small set of consistent new features. I tend to pack different unrelated features mixed with bug fixes as they come to me, and the result is a bit messy.

You can read about the changes in the CHANGELOG, or keep reading for a detailed description of the highlights. Or you can just go and grab it from BitBucket or GitHub and trust me that it’s awesome! (But wait: you should at least read the first couple of sections below, because there are a few breaking changes.)

New folder structure

As we get closer to a 1.0 release, I changed the folder structure to make it look more like a genuine application. It used to look like an already installed system (a _piecrust folder with the code and a sample website folder), which was nice during early prototyping and development. Now, however, it doesn’t feel right anymore, especially since chef has evolved into a sophisticated tool.

So the folder structure now has a bin folder with chef, the usual src, libs, tests folders, and a single piecrust.php file to include at the root.

For backwards compatibility purposes, the important files are also available at their old location (chef and piecrust.php) but they will issue a warning message if you use them from there. They will be removed when we hit version 1.0.

Broken stuff

There were a few things that didn’t quite make sense, which I also fixed, unfortunately resulting in breaking changes:

  • When pretty_urls are disabled, pages with pagination will have the same first page URL as if they didn’t have sub-pages (e.g. foo/bar.html). Sub-pages will have URLs like foo/bar/2.html. This makes URLs more consistent.
  • PieCrust now supports extensions other than .html for pages. Before, you had to give every file in _content/pages an .html extension, and use the content_type configuration setting in the header to change the output to something else, like .xml or .rss or whatever. Now, you can actually create files like _content/pages/feed.xml and they will be handled correctly. This means content_type is now only meant for setting HTTP response headers in CMS mode, which makes more sense.
  • Some chef commands had inconsistently named options: some used hyphens, some used underscores, and some just concatenated words together. I don’t know why I never noticed it until now… In most cases, the old option name is still available, but it will issue a warning saying that it will be removed when we hit version 1.0.

Multiple formats

Thanks to the magic of open-source, PieCrust now also supports changing the format of a page right in the middle of its content. This is useful when you’re using a minimalist format like Markdown and just need that extra bit of power at one point to control some CSS or HTML property, without having to write it all verbatim:

So here I am, writing some stuff. It's all _nice_ and _fun_.

But then, I need to write a boxed piece of text that will be laid out nicely with CSS. Let's use HAML!

<--haml-->
.boxed_text#tutorial1
  .left.column
    :markdown
      This is some **cool stuff** right here!
  .right.column
    :markdown
      Hey, this is nice too!

<--markdown-->
Ok, back to normal, now. **Whew!**.

This example obviously uses the PhamlP plugin (which adds the Haml and Sass languages), but you should get the idea.

Also, if you want to start a content segment using a different format than the one defined for the page, you can append the name of the format you want to use like this:

---newsegment:textile---
Another content segment using **Textile** formatting!

Templating features

The iterators obtained through pagination, blog.posts or link have a few new tricks:

  • You can sort posts and pages with a new .sortBy(foo) function. Here, foo is the name of a configuration setting in the pages’ or posts’ headers that will be used to order those pages or posts.
  • You can now access the content segments of pages and posts you’re iterating on – not just metadata like title and date.
  • You can use pagination.all_page_numbers to get, well, all the page numbers for the current page. You can also use pagination.page(i) (where i is a number) to get the link to a specific page. This will help you build pagination footers more easily – see the example below.
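For example, a minimal pagination footer could look like this (just a sketch built from the names above):

{% for i in pagination.all_page_numbers %}
[{{ i }}]({{ pagination.page(i) }})
{% endfor %}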

Portable baking

The --fileurls option to chef bake used to bake a website with absolute local paths for all links created with the pcurl family of template functions. This made it easy to just bake a site and double-click on the main page to preview it locally without any web server.

It was also flawed, because it meant the site could only be previewed in that exact place on the file-system – you couldn’t give it to a colleague or client to review.

A new option, --portable, effectively replaces the --fileurls option (which is now deprecated). It will create relative links (with lots of ../ in them) so that you can move the baked static files around and still have them previewable locally in a browser.
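Usage is a simple switch of options:

# Before (now deprecated):
chef bake --fileurls
# Now:
chef bake --portable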

Debug info in CMS mode

If you’re running PieCrust in dynamic CMS mode, you can now set site/enable_debug_info to false in the site configuration to disable the ?!debug feature, which could potentially expose private information to visitors.
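In YAML form, in the site configuration file, that’s:

site:
    enable_debug_info: false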


Why PieCrust?

Recently I realized I’ve been working on, and talking about, PieCrust for quite a while, but I have never given any reason as to why on Earth I have written my own CMS and static site generator when there are so many already out there.

Like many open-source projects, PieCrust was born out of 2 very self-centered stimuli.

One: it’s fun to write new stuff!

And by “new”, here, I’m of course talking about things that were new to me at the time. Writing PieCrust, I learned a lot about web technologies, from HTTP servers to caching and proxies and whatnot.

With my next projects, I’ll learn a lot more – I’m looking at databases and scalability and security and other bigger, more complex things – but it was good to start with something simple and basic.

Two: everything else sucks!

Well, that’s not true, but you know what I mean. The average developer (and that includes me) is never quite happy with what’s available out there.

Most other static site generators are completely OK – they work well for the most part – but they have 2 flaws: they’re simplistic, and they’re doing it wrong.

Simplicity

Most other engines are too simplistic. They confuse simplicity with a lack of features. I believe you can have a fully-fledged static site generator without sacrificing how simple it is to use and work with.

Hopefully I succeeded with PieCrust. You tell me.

What the user does

Most other engines are built from the ground up to be static site generators. This means that when it’s time to preview your work in a browser, what they do is just re-generate the whole site and show the resulting HTML file matching the request. Thankfully, they do it incrementally, like a compiler, which means they only generate what was changed, but it’s not ideal unless you’re just looking at simple pages.

If you’re modifying a page with a lot of dependencies, that re-generation can take several seconds, which means you have to wait until you see your updated work. This happens, for example, when you’re working on a blog post that has various tags and categories: updating it means not only re-generating that post’s page, but also the home and/or archive page on which it appears, and all the relevant tag/category listing pages. Some static site generators take a few shortcuts here by not re-generating the whole site in that case, resulting in glitches in the preview.

This “compiler-like” approach works well when you always want the full result of the generation, but in the case of a user working on some content for his website, he only wants to see one page. You can only press F5 on one window at a time. So whatever dependencies a piece of content has, the engine should be able to just generate what the user wants to see. And this is what CMSes have been doing for a long time.

That’s why PieCrust is built from the ground up as a database-less CMS. Sure, it can generate a static site, but that’s just a side-feature. The focus is on what the user does the most (iterating between writing and previewing), and not what the user wants (a static website). What he wants happens later, once he’s done working.

The nice thing is that PieCrust ended up being both a good CMS (especially if you use it with a reverse proxy) and a good static site generator (I’ll post some comparative benchmarks someday).


PieCrust 0.7.0

The new version of PieCrust was tagged in the stable branch yesterday, and oh boy does it have some cool features. You can read about them in the CHANGELOG, or keep on reading for a more detailed presentation.

Rhubarbed Strawberry Daiquiri Tart Plated

Blog archives

Until now, if you wanted to display a list of posts on your website, you had to go through the pagination.posts variable, which has the nasty side-effect of generating pagination for the current page (i.e. the page would have as many sub-pages as needed to show all your posts, with some expected links like “next entries” and “previous entries” to navigate between them). You could of course use the single_page setting to prevent any sub-pages from being created, but that was a bit awkward, especially since you also probably wanted to tweak other things, like the number of posts to show.

Now there’s a more straightforward way with the blog.posts variable, which returns all your posts without any limit or filtering. It also doesn’t affect pagination in any way. The nice thing is that you can still skip, limit or otherwise filter the posts.

For example, on my own home page I show the latest 7 posts. This can be done like this:

{% for post in blog.posts.limit(7) %}
* [{{ post.title }}]({{ post.url }})
{% endfor %}

You can also use Twig’s built-in slice filter to skip and limit the posts, as in:

{% for post in blog.posts|slice(0, 7) %}
* [{{ post.title }}]({{ post.url }})
{% endfor %}

But you can also do more advanced things, like:

---
my_filter:
    has_tags: cooking
    not:
        is_status: draft
    not:
        has_tags: appetizer
---
{% for post in blog.posts.filter('my_filter').limit(5) %}
* [{{ post.title }}]({{ post.url }})
{% endfor %}

The blog variable also contains other cool things, like tags and categories, and, because this whole feature is called “blog archives” after all, years and months. This makes it possible to do the following:

# Blog Archives

{% for month in blog.months %}
## {{ month.timestamp|date('M Y') }}

{% for post in month.posts %}
* [{{ post.title }}]({{ post.url }})
{% endfor %}
{% endfor %}

For more information about the blog archives, go read all about it in the official documentation.

Chained file processors

This is pretty cool: PieCrust can now chain your file processors when baking (and even previewing!) your website.

This means that if you have, say, the LessCSS and YUICompressor processors (the first one ships with PieCrust, the second one is available as a plugin), not only will your Less files be compiled to CSS, but they will also be further processed by the compressor, resulting in a compiled and compressed file!

There’s an example of how it works on the official documentation.

New pagination features

The pagination object now has two new properties: next_post and prev_post.

These objects will be valid if the current page is itself a post, and they will point to (unsurprisingly) the next and previous posts in the blog. Those are the full blown post objects, which means you have access to the title, timestamp, etc.

This is useful if you want a blog layout where you have links to those previous/next posts at the bottom of each article (like this very blog!).
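For example, at the bottom of a post’s layout, something like this (a sketch based on the properties described above):

{% if pagination.prev_post %}
Previous: [{{ pagination.prev_post.title }}]({{ pagination.prev_post.url }})
{% endif %}
{% if pagination.next_post %}
Next: [{{ pagination.next_post.title }}]({{ pagination.next_post.url }})
{% endif %}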

New chef commands

Chef, the command line tool for PieCrust, got a few new commands of its own:

  • chef showconfig prints parts or all of the website’s configuration settings.

  • chef find lets you find pages, posts and templates in the current website.

  • chef plugins now has sub-commands to let you search and install plugins from a plugin source. Right now, the only type of plugin source that works is a BitBucket source (it will look for any repository that starts with PieCrust-Plugin-), and the only default source is my own BitBucket account (you can add more sources with the site/plugins_sources configuration setting).
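For example (the exact arguments here are only illustrative, not gospel):

chef showconfig site
chef find feed
chef plugins search markdown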

New importers

You can now import posts and pages from a Jekyll or Joomla website.

Also, note that the chef import command changed:

  • The format and source options are now mandatory arguments – although if you use the old syntax you will be warned about the change.

  • You can now specify importer-specific options, like the WordPress SQL table prefix, as a command-line option, which is a lot better than the previous awkward connection-string suffix hack.

Twig extensions

The built-in Twig extension now comes with the following additional features:

  • A striptag filter, which strips outer HTML tags from a string.

  • A textfrom function, which includes text from an arbitrary text file. This is useful if, like me, you write your blog posts as simple text files stored in Dropbox, but you sometimes want to preview the article in the context of your blog without having to copy/paste the text.
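Usage is plain Twig – something like this (the string and the file path are made up for the example):

{{ '<p>Some text</p>'|striptag }}
{{ textfrom('drafts/my-next-post.md') }}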

Also, you can set the twig/auto_escape site configuration setting to false if you don’t want Twig to auto-escape everything. This means you won’t have to pass all your articles through the raw filter in your templates anymore, but it also means you will have to make sure you escape other stuff accordingly.
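That’s a one-line change in the site configuration:

twig:
    auto_escape: false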

Miscellaneous changes

  • Pages are now cached differently: they used to be parsed, formatted and rendered in one go, with the final result cached to make it super fast to re-use in templates. However, that broke some advanced use-cases with Twig and other template engines that support inclusion and such things.

    Now, only the parsed pages are cached – formatting and rendering will happen every time the page is requested during a session. This actually doesn’t affect performance that much because, within a session (a preview request, or a baking), the final page is cached in memory anyway.

  • The bake information file (bakeinfo.json) used to pollute the output directory. Not anymore, as it’s now saved in the cache directory.

  • The Less and Markdown libraries have been updated to their latest version.

That’s it! Go grab PieCrust 0.7 from GitHub or BitBucket as usual, and be sure to send me feedback and bug reports.


Catching up with CodePlex

For some reason, for the last year or so, I received occasional notifications about forum posts on my .NET projects over at CodePlex, but never anything about new issues being reported… Well, I recently had a look at the issue tracker and found, horrified, that dozens of issues had been sitting there for several months, untouched. Sorry about that. I guess the lessons are:

  • For me: check the notification settings on any code hosting website.
  • For CodePlex: maybe you guys could add the current number of open issues next to the “Issue Tracker” link in the navigation menu, just like GitHub and BitBucket do.

Anyway, most of the bugs and feature requests were easy enough, and in a couple of days I solved most of them (only 1 got rejected). This mostly involved Textile.NET and IronCow. To celebrate, I also migrated Textile.NET to Mercurial source control.


The Journey To Digital Comics: Manga Apps

In the previous step of the journey to digital comics, we looked at American comics – my main source of graphical entertainment. This time, we’ll look at mangas and their derivatives (manhwa, etc.), which used to be my close second until I became too old to read about high-school girls, alien high-school girls, demon alien high-school girls, and miniature gender-swapping demon alien hunter high-school girls. But then I figured, fuck it, I’ll just look like a creepy old guy on the bus. No worries.


A bit of history

Depending on how you look at it, digital mangas are either way ahead of the American comics offering, or way behind. That’s because mangas and anime have always had a vibrant “ethical piracy” scene, with scanlations and fansubbing. Historically, most of the productions coming out of Japan or elsewhere in Asia, well… never came out of there. It was impossible for fans anywhere else in the world to read those books or watch those TV shows. As a result, a community of translators, scanners and recorders was born.

That community was originally dedicated to supporting the original authors, however: whenever a specific series was licensed in the USA, the scanned manga or recorded TV series would be removed from the servers, and visitors would be gently redirected to the website of the company that licensed the product.

Of course, it didn’t take long for the community to branch out into several groups that would not necessarily follow that rule – whether because they catered to fans from countries other than the USA (there are a lot of those) or to people who just want free shit (there are even more of those).

Because of this pretty exhaustive free offering of digital mangas, the licensed, legal offering took a long time to materialize and, to this date, is still in its infancy.

As far as I know, the only available apps for legal digital mangas on the iPad are the ones from Viz Media, Yen Press and Digital Manga Publishing (DMP). A couple more apps are available in the USA, like the Kodansha app, but not here in Canada – which shows how short-sighted some of those publishers are.

Some other publishers are at the bottom of the well. Square Enix, for instance, only lets you read your purchased books on your PC, as far as I can tell, and their website’s user interface is horrible – and I’m not the only one who thinks so. To give you an idea: it doesn’t even work in Google Chrome.

Worse, most of the previously mentioned apps are only available on iOS: only DMP’s app is available on Android. The other publishers’ books can sometimes be accessed via their websites, but not always, which is bad. And when they can, it’s mostly through Flash-based readers, which means that even on a good Android tablet your reading experience won’t always be optimal.

If that wasn’t enough, some publishers also handle purchases differently on the web and on the iPad, which means that the e-manga you bought may only be readable on one platform and not the other – this alone deserves a “worst idea ever” prize.


You think it couldn’t get worse? Check this out: the Japanese Digital Comics Association is trying to reach out to its worldwide readers with an initiative called JManga. It’s a very nice idea – offering all their collective books through a unified web store – but if you go check it out, you will likely get a headache, and also wonder if the website is legit, because it looks like a spam/porn website run by Russian pirates. Also, as far as I can tell, there’s no iOS or Android app, which makes it useless (although some people are really dedicated to making it work).


Overall, if you want a precise review of which apps are worth your time and money, you can read Manga Bookshelf’s “Going Digital” articles – for example, their “Manga on the iPad” wrap-up from a couple of months ago… but to be honest, I can’t find any of them good enough at the moment.

The Viz Media app is the only promising one so far – it does what you expect, the interface is pleasant, the catalog is decent, and the prices are fair. The other apps, however, still suffer from various combinations of bad user experience, limited catalog, and digital mangas priced higher than their print counterpart.

Now, given the little history lesson from the introduction, you’re probably expecting a lot of apps offering scanlations on Android, and none on iOS (because Apple would be censoring them, right?). Well, not really. For some obscure reason, Apple approved several dozen such apps and never looked back.


Maybe it’s because they want a good digital comics offering to boost the iPad’s value proposition. Maybe it’s because they don’t know about scanlations. Maybe it’s because the legal status of those apps is more subtle than you’d expect. Go figure. The only thing I know for sure is that the whole idea of iOS having better quality apps because Apple reviews every single one that gets on the App Store didn’t really work in this case: a lot of those apps are shitty-looking, have horrible usability, or both. And I’m not even talking about the crashes.

All of those apps, however, have something in common: they all get their books from the same sources, namely MangaFox, Mangable and Mangareader. This means they all have similar catalogs, with similar quality, so the only differentiator is the user experience.


As far as the iPad is concerned, the best app in that category is, hands down, the “MangaRock” collection of apps from Not A Basement Studio. It’s a bit confusing at first, because there are 3 apps and it’s not obvious which one does what, so I’ll save you the trouble of finding out:

  • MangaRock and MangaRock MF are mostly designed for the iPhone, and only download books from, respectively, Mangable and MangaFox. They’re free, however, and have an iPad layout, so you can try those first.
  • MangaRock Unity is designed exclusively for the iPad. It’s not free, but it pulls books from all 3 previously mentioned scanlation websites simultaneously, and has a better suited UI. This means it has a better user experience overall.

Conclusion

At the moment, there is absolutely no incentive or advantage for you to read 100% legal digital mangas, except for the warm and fuzzy feeling of being honest and spending your cash. If that’s you, then try the Viz Media app and its web counterpart – none of the other ones are worth your time and money for now.

For the series not owned by Viz, I can’t recommend anything other than MangaRock Unity or, you know, the legal, dead-tree, real-world-space-taking book. The quality of the average scanlation is usually well below that of an official e-manga, and the translations are, at best, full of typos, but they’re good enough, especially if you just want to try something before buying the printed book.


Mercurial’s onsub and mixed sub-repos

If you’re using Mercurial with mixed sub-repositories (i.e. sub-repositories handled by different revision control systems), you may be interested in this: I just got a patch accepted into the onsub extension.

The extension lets you run commands on your sub-repositories. For example, with my own dotfiles repository, running on Windows:

> hg onsub "echo I'm in %HG_SUBPATH%"
I'm in lib\hg\hg-git
I'm in lib\hg\onsub
I'm in vim\bundle\badwolf
I'm in vim/bundle/colorschemes
I'm in vim/bundle/commentary
I'm in vim/bundle/ctrlp
I'm in vim/bundle/easymotion
I'm in vim/bundle/fugitive
I'm in vim\bundle\gundo
I'm in vim/bundle/haml
I'm in vim\bundle\lawrencium
I'm in vim/bundle/markdown
I'm in vim/bundle/nerdtree
I'm in vim\bundle\piecrust
I'm in vim/bundle/powerline
I'm in vim/bundle/ragtag
I'm in vim/bundle/repeat
I'm in vim/bundle/solarized
I'm in vim/bundle/supertab
I'm in vim/bundle/surround
I'm in vim/bundle/syntastic
I'm in vim/bundle/vimroom

As you can see, I’ve got quite a few sub-repos. However, some are Mercurial sub-repos, while others are Git sub-repos (that’s one of the nice features of Mercurial: it has decent interop with other RCSes). Which ones are which, though? That’s easy, there’s a new HG_SUBTYPE environment variable now:

> hg onsub "echo I'm in [%HG_SUBTYPE%]%HG_SUBPATH%"
I'm in [hg]lib\hg\hg-git
I'm in [hg]lib\hg\onsub
I'm in [hg]vim\bundle\badwolf
I'm in [git]vim/bundle/colorschemes
I'm in [git]vim/bundle/commentary
I'm in [git]vim/bundle/ctrlp
I'm in [git]vim/bundle/easymotion
I'm in [git]vim/bundle/fugitive
I'm in [hg]vim\bundle\gundo
I'm in [git]vim/bundle/haml
I'm in [hg]vim\bundle\lawrencium
I'm in [git]vim/bundle/markdown
I'm in [git]vim/bundle/nerdtree
I'm in [hg]vim\bundle\piecrust
I'm in [git]vim/bundle/powerline
I'm in [git]vim/bundle/ragtag
I'm in [git]vim/bundle/repeat
I'm in [git]vim/bundle/solarized
I'm in [git]vim/bundle/supertab
I'm in [git]vim/bundle/surround
I'm in [git]vim/bundle/syntastic
I'm in [git]vim/bundle/vimroom

That makes it possible to do something slightly different depending on the sub-repo type, but it’s still tedious. For example, the most common operation for me is to pull and update all those sub-repos. The commands are different (hg pull -u vs. git pull), and writing an if statement in Bash or Cmd is cumbersome, especially as a one-liner argument.

That’s where the other new feature comes in: there’s a new -t/--type option that filters sub-repos based on their type:

> hg onsub -t hg "echo Mercurial subrepo: %HG_SUBPATH%"
Mercurial subrepo: lib\hg\hg-git
Mercurial subrepo: lib\hg\onsub
Mercurial subrepo: vim\bundle\badwolf
Mercurial subrepo: vim\bundle\gundo
Mercurial subrepo: vim\bundle\lawrencium
Mercurial subrepo: vim\bundle\piecrust

This makes it easy to bring all the sub-repos up to date:

> hg onsub -t hg "hg pull -u"
> hg onsub -t git "git pull"

Hopefully it makes life easier for a few other people out there… it sure does for me!