One thing stood out to me: Molyneux’s obsession with creating “living worlds”, i.e. games where you’re free to do many things (plant trees, build a house, have kids) and choose many paths (be good, be evil, choose this or that in each situation), all the while witnessing the consequences of those acts. He’s not the only one trying to do this in video games, but he’s probably the one who has tried it the most – or at least talked about trying it the most.
Technically speaking, this is a potentially fascinating problem. Will video game RPGs have to implement advanced AI and machine learning techniques for the game to truly react to your actions? Maybe. Hey, who knows, maybe Fallout 9 will be where the first sentient computer program emerges, after some guy in North Carolina has played it for 7 hours straight or something. But I’m wondering – is that even the point? Should video game designers strive for this kind of “perfect” sandbox experience? Or are they just working in the wrong medium?
There’s already a type of game where you’re free to do whatever you want, and the game world reacts accordingly – not only in a logical or plausible way, but also in a narratively interesting way: tabletop, pen & paper RPGs… or, you know, just “RPGs”, as we called them back in the day1. If you’re writing a comic book but covering every page with descriptions and inner monologues, maybe you should be writing a novel instead… and if you’re struggling to make a video game where you can do whatever you want, maybe you should be writing RPG books?
Damn you, video game RPGs – especially JRPGs, which have close to zero “RP” in their “G”. ↩
I have shows for my kids that I’d rather they didn’t binge watch. For example, a show that airs weekly for six months a year, like Dragon Ball, is supposed to evolve with its audience. But if my kid watches 7 or 8 episodes a week because that’s all he ever wants to see when he gets TV privileges, it would take him only a few months to end up in front of the teenage power fantasies of the Saiyan Saga.
Say your kids watch stuff on Plex or Kodi or whatever. You can remove all the episodes of the show they’re watching by putting them in some separate folder, out of your HTPC’s reach. Then you use SaturdayMorning to bring the video files back, one by one, every weekday or every Saturday or whatever you want.
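I won’t speak for SaturdayMorning’s actual internals, but the drip-feed idea can be sketched in a few lines – hypothetical folder names, one episode moved back into the library per scheduled run:

```python
# Illustrative sketch (NOT SaturdayMorning's real code): each scheduled run
# moves the next episode, in order, from a hidden "stash" folder back into
# the library folder that the HTPC actually scans.
import shutil
from pathlib import Path
from typing import Optional

def release_next_episode(stash: Path, library: Path) -> Optional[Path]:
    """Move the first remaining episode from `stash` into `library`."""
    episodes = sorted(stash.glob("*.mkv"))  # sorted filenames => episode order
    if not episodes:
        return None  # nothing left to release
    target = library / episodes[0].name
    shutil.move(str(episodes[0]), str(target))
    return target

# Run it on a schedule, e.g. from cron every Saturday morning:
#   release_next_episode(Path("/media/stash/DragonBall"),
#                        Path("/media/library/DragonBall"))
```

The scheduling itself would be cron (or a launchd/Task Scheduler job) calling the script at whatever cadence you picked.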
With only one new episode ahead of them, you may find that your kids ask for TV slightly less often, diversify their shows, and/or get more excited about a “new” episode being available to watch.
I believe that in recent years, while looking for revenue models that work for electronic games, game designers and publishers have stumbled upon some formulae that work only because they abuse segments of their player population. Games can have addictive properties – and these abusive games are created, intentionally or not, to exploit players who are prone to certain addictive behaviors.
It’s a good read, as Garfield tries to formalize what’s OK and not OK in games, with clear guidelines about gameplay aspects that make a game become “skinnerware”, while still allowing some gray areas. Of course, many people were quick to point out that his own game, Magic, falls at least into those gray areas. After all, a certain percentage of Magic players are known to spend huge amounts of money to acquire rare cards, and, generally speaking, buying more packs gives you better cards, which gives you an advantage.
What saves Magic from the skinnerware category, in my opinion, is largely that it’s a physical game, not a video game1, so whatever you buy still has value and can be sold back. The other thing is that its “power-ups for money” mechanism is not quite open-ended. True skinnerware games typically let you buy an endless amount of coins or jewels or energy charges or whatever. Magic, on the other hand, has a limited (although quite big) catalog of cards. Trying to get them all by buying booster packs quickly hits diminishing returns because of the rarity of many cards, so you would soon turn to individual purchases at market price. It’s still a shitload of money, but it’s a finite shitload.
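Those diminishing returns are easy to see with a toy model – made-up numbers, not actual Magic rarity data: assume a set of 100 distinct rares and one uniformly random rare per booster.

```python
# Toy collection model (illustrative numbers, not real Magic data):
# a set with `n_rares` distinct rares, one uniformly random rare per pack.
def expected_distinct_rares(n_rares: int, packs: int) -> float:
    """Expected number of distinct rares owned after opening `packs` boosters."""
    return n_rares * (1 - (1 - 1 / n_rares) ** packs)

# The marginal value of each pack shrinks fast: the first 100 packs get you
# roughly two thirds of the set, while the last few rares take hundreds more
# packs -- which is why collectors switch to buying singles at market price.
for packs in (10, 100, 300, 1000):
    print(packs, round(expected_distinct_rares(100, packs), 1))
```

This is the coupon-collector effect: every additional pack is increasingly likely to yield a duplicate rather than a new card.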
I’ve never had even the slightest opportunity to get my own office1 so I frankly have no idea whether a private office would be an improvement – I just don’t know any better.
We do a fair bit of asynchronous communication, however. This is pretty much unavoidable, since, over here on the Frostbite Engine team, we have to deal with customers and co-workers who are spread across a dozen locations around the world, with up to 9 hours of time difference.
In some ways, however, it’s funny that Larson recommends replacing meetings with emails, since a lot of my coworkers already complain about having to deal with too much email. Also, the way he describes how a “quick” email conversation can replace a lengthy meeting is misleading: for someone who has turned off all notifications and checks email only a couple times a day to improve productivity, this “quick” 4-message back and forth would actually take 2 days to complete.
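The back-of-the-envelope math behind that claim (my numbers, not Larson’s): with two email checks per day, a message can sit unread for up to half a workday, so a 4-message exchange stretches out to about two days.

```python
# Illustrative latency math for a "quick" email exchange (assumed numbers):
# checking email twice a day means each message can wait up to half a
# workday before it's even read by the recipient.
checks_per_day = 2
max_wait_days_per_message = 1 / checks_per_day  # 0.5 workday
messages = 4
total_days = messages * max_wait_days_per_message
print(total_days)  # -> 2.0 workdays
```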
In the zone
The part that caught my eye the most is the one about reaching a “flow state” – something most people call being “in the zone”.
I have almost no problem reaching that state – even in an open floor plan.
Arguably, I’m not important enough to receive enough emails or meeting invites to experience the problems a lot of other people (most of them more senior than me, I assume) complain about, so that must help… but I basically get “in the zone” often enough that, on a regular basis, I finish a task, take off my headphones, and realize that it’s 2pm and that everybody had lunch already.
While most people use the Pomodoro technique to help protect themselves from distractions, I was, for some time, using that technique to help me take a break every now and then… because being “in the zone” for too long would frequently give me painful migraines (at least once a week). Even when I used Pomodoro timers on my phone, I would frequently not notice them going off!
Then again, I’m one of those people that most of you probably hate: the ones who can fall asleep in less than 5 minutes. So I suppose my brain and I really get along well when it’s time to shut off distractions. Yay brain.
Video game companies are almost all using open floor plans, and nothing will change that any time soon. ↩
It’s September 2016, and Apple has once again shown some pretty cool hardware: dual cameras, clever asymmetrical core design, water resistance, blah blah. I’m not interested since I already have the very recent 6S (I’m not that rich or desperate), but it’s a very nice piece of technology.
The change that will create the most ripples across the rest of the market, however, is, I think, the removal of the headphone jack. Actually, scratch that. The removal in itself is not that important – it’s what they replaced it with that matters. Yet 90% of the press gets hung up on the removal.
I think they’re all missing the point.
The jack port removal itself is a temporary matter. It’s going to be very annoying for those of us whose audio needs go beyond “one phone and one pair of headphones”, but it will be temporary. Hopefully.
I don’t imagine Lightning port headphones will take off – as a manufacturer you’d have to be crazy to invest in a proprietary connector that requires licensing fees (something you didn’t have to pay before) and that also prevents you from selling your products to half of your market. Plus, even low-cost manufacturers are already able, to some degree, to produce relatively cheap Bluetooth headphones. So that’s where the market will go, and where Apple wants to go anyway.
The real problem is that in my opinion Apple opened a can of worms with their wireless headphones: they run with a proprietary “secret sauce” layer on top of Bluetooth. Some people are worried about the potential for DRM but I’m mostly wondering if we’ll see some kind of “wireless protocol war” starting in the next couple years.
Right now, Apple’s “secret sauce” is supposed to be backwards compatible with normal Bluetooth devices, but you know how these things go. The proprietary layer will get bigger with each new release – I’m even expecting that you’ll soon have to download firmware updates for your headphones on a regular basis – and all those cool features will create envy. You can bet that someone like Samsung will come up with their half-assed version of another proprietary layer on top of Bluetooth, as a “me too” feature. Maybe there will be a couple of those out there. Some of those implementations may have some kind of DRM, added under pressure from the movie or music industry, in exchange for some short-term IP, marketing, or financial boost.
Eventually the Bluetooth SIG will try and draft some new version of Bluetooth that tries to fix all the basic problems that really should have been fixed before anybody decided to remove the jack port… and meanwhile, Apple has a 5+ year lead on wireless technology, keeps growing their accessory licensing revenue, and is laughing at how everybody else is still having trouble pairing headphones correctly. It’s like the dark ages of the W3C all over again, for audio.
So yeah, Apple is really clever here. I’ve got no doubt iPhone users will be buying more and more “W1” enabled headphones from approved manufacturers… it’s a smart move. But not a courageous one. Courage would be to open-source their Bluetooth layer. Courage would be to work with the Bluetooth SIG (which they’ve been a member of since last year) to improve wireless audio for everyone.