Tales From An Unchecked Mind

A Blog By Daniel Kennett

Mental Health

Until today, this story has been shared with four people in the world: my wife, my manager (who only knows the parts of the story he actually took part in), my friend Tim, and my psychiatrist.

Recent events in both my private life and the world at large have convinced me that discussing mental health can only be a good thing. If you only visit here for posts about programming and stuff, you’re welcome to give this a miss. However, I’d encourage you to give it a read — who knows, it may help you or someone you know.

~

There’s nothing quite like writing a letter demanding a solicitor release money from your dead Mother’s estate to make you feel like the scummiest person on Earth.

This is the pivotal moment that poisoned my mind for six months, damaging a time that should otherwise be only filled with wonderful memories — those of marrying my wife and buying a house together.

Grieving the loss of a parent is always going to be a shitty part of your life no matter what. Not to go all Batman about it, but my father died when I was younger and it’s a process I’ve been through before, albeit with a much less mature mind at fifteen years old. This time around I was doing fine, all things considered, and had settled into some sense of peace once I realised (amongst other things) that since death is a fact of life, a parent passing before their child is orders of magnitude less of a tragedy than vice versa. Yes, it’d be lovely if nobody had to die, but that’s not how life works.

However, being her closest living relative, I had the task of supporting the solicitors assigned to administer my late Mother’s estate. In May 2013 I sat in their stuffy office in Stevenage in the UK and was assured that the estate should be a fairly simple affair and that they couldn’t see it taking more than six months to complete. Two weeks later they sent me a document that contained a few currency conversions, all of which had been calculated wrong. And so started the most harrowing set of bureaucracy I’ve ever encountered in my life.

~

My breakdown came in a one-on-one meeting with my manager in October 2013. In a previous blog post, I hinted that I was thinking of trying to “go Indie” again, and in October I’d committed to take six months off from March 2014 to try just that. All the paperwork was signed, and I was looking forward to making my own apps “for real” again.

However, the process with the solicitors was still trundling along and having to research the multitudes of laws at play to make sure they weren’t making more mistakes was really dragging me down, and it was starting to affect my productivity at work. I thought it’d be a great idea to move this six month leave from work forward to January so I could combine making the app with taking some time off to recuperate.

The flat “No.” that came out my manager’s mouth hit me like a brick wall. “…I’ve seen you planning this leave for months now, and every time you talk about it your face lights up. If you’re unwell, you should use medical leave to get better, and make sure your time away is kept a happy thing.”

I just kind of sit there in a dumbfounded silence.

“Obviously, the company has no legal grounds to deny this request, so if you’re sure you want to do this I’ll approve it, but I really don’t think you should.”

“…are you alright?”

I realise I’ve been sitting there in silence for what must have been a minute. I can’t see very well, so I take off my glasses to clean them. To my surprise, I realise I’m crying quite heavily.

“Go home. Take a couple of days to catch up on your sleep, then call the doctor and get help. Don’t worry about your work, I’ll cover for you and tell people you have the flu or something.”

“But…”

“Now.”

I get up in silence and scurry to my desk, staring at the floor hoping to God that nobody looks at me. I put on my hoodie and leave the office as fast as I can, tears streaming down my face. I don’t break pace until I’m sitting on the underground train, huddled in the corner with my hood up trying my hardest to be invisible. I’m sure everyone else thinks I’m some unstable lunatic as they avoid sitting near me. Hell, I feel like an unstable lunatic. I’m so upset and so confused that I’m genuinely wondering if I’d had a stroke, or if I’d ever be able to show my face at work again.

After that day I do whatever I do when I’m going through a tough time, especially when I can’t rationalise and understand what’s going on — I withdraw into myself. Social interactions become more and more difficult as time draws on, mainly because I feel more and more incapable of functioning in civilised company. Suddenly the answer to “How are you doing?” becomes an elaborate lie, and the friend telling me about how he’s having a great time becomes a stab in the heart. Presenting myself for casual interaction becomes hard work, so I stop doing it.

Even online interaction becomes difficult eventually. You’ll notice a hole in this blog between October 2013 and July 2014 due to this, and while I can’t seem to find any tools to show my social network engagement over time, I’m sure you’ll find that my Twitter and Facebook use during that time dropped off a cliff, too.

Facebook in particular is completely devastating to someone going through a tough time. A quote I hear a lot goes along the lines of “You’re comparing your blooper reel to everyone else’s highlights” and it’s a fairly accurate little soundbite. My Facebook feed, just like everyone else’s, is full of people jostling for attention, desperate to prove they’re winning at life. Look at my amazing holiday! Look at my awesome new phone!

Instasham by Pandyland

Typically, all these posts don’t really bother me — hell, I do the same thing — and I unsubscribe from the people who are particularly obnoxious. In my broken state, however, each one of those posts became a slap in the face, causing me to further distance myself from my friends and coworkers. Eventually, I’d stopped talking to most of the people I knew and had stopped attending any social events at all.

~

Nearly a year later, I realise that manager practically saved my life. I didn’t return to work for well over a week after that incident, and only after doing as instructed — I spoke to my local doctor who referred me to psychiatric care at a clinic in Stockholm.

My psychiatrist (who I’ll call my doctor from now on, since it takes me about two minutes to type ‘psychiatrist’ correctly each time) and I pieced together what had happened. In short:

  • My Mother had died.

  • Being the closest living relative, I had to support the solicitor appointed to deal with my late Mother’s estate.

  • The solicitor was, to put it nicely, requiring a lot of support.

  • Being in a different country to the solicitor made their multi-month gaps in communication all the worse.

All in all, I was having a pretty shitty time. The combination of these factors, though, was poisoning my mind and I couldn’t even see it. Looking back, my correspondence with the solicitor is all like this:

  • One email to the solicitors argued that a multiple-month delay to “wait for some paperwork” wasn’t acceptable since the paperwork in question was super easy to get, never mind the fact that I had copies of all of it that I’d have been able to supply in ten minutes if they’d only told me what they needed.

  • Another email discussed Section 27 of the UK’s Trustee Act 1925 and the liabilities of the various parties dealing with the deceased’s estate before and after the Section 27 notice was filed.

  • Yet another discussed finding records from my Father’s death and supplying them to the UK Tax Agency to get the correct tax status for the estate.

All of this is completely normal stuff to be discussing with legal representatives during this sort of process. However, the entire time it just felt like I was typing GIVE ME MY MONEY. GIVE ME MY MONEY. GIVE ME MY MONEY. over and over and over again. I felt like the worst human being alive simply for making sure my Mother’s estate was dealt with properly and promptly.

Except I didn’t know that’s why I felt horrible. I just felt like a horrible person and I didn’t know why.

Turning it Around

My doctor listened to my story and the most wonderful thing happened. She became autocomplete for my mind, filling in holes in my head I didn’t even know were there.

I told her about that meeting with my manager in which I completely melted down. “Yes, it can be funny how someone else noticing you’re not doing so well can make you realise how bad you’re actually feeling”, she said. I nodded with a slight laugh and replied “Yes, it really brought it home”. Wait, it did? Of course that’s what happened! My mind flooded with a clear memory of that meeting — about how I was touched that my manager was so caring, about how I was really not coping well with the multiple jobs I had to do.

Later, I told her about the constant back-and-forth with the solicitor, and made an offhand remark about feeling “kinda greedy” even though I know, rationally, that this process has to happen no matter who does it. “That sort of thing can really eat at the soul, can’t it?” Of course that’s why I feel so bad!

On the way home, I’m bursting with excitement at finally being able to see my problems, and can’t wait to tell my wife all about it. Destructive Daniel kicks in and I start to feel guilty — my sweet, loving wife has done nothing but stand by me and support me while I turn myself inside out for months, and I end up going somewhere else to get help and am transformed in an hour. I wonder if she’ll feel like she’s not capable of supporting me. I wonder if I’m being selfish.

Of course, my wife is ecstatic that I’m making positive progress and I feel like an idiot.

~

Once we’d rooted out what exactly was wrong, we started a form of CBT (Cognitive Behavioural Therapy) which, very simply, is a treatment that embodies “If you were happy when you did x, you should do x.” My doctor spent a session drawing a diagram on the whiteboard of the destructive cycle that’s common:

First, something crappy happens.

  1. Because you feel crappy, you can feel tired and have the instinct to stay at home and rest and avoid people.

  2. Because you’re at home, you miss out on the things that give you joy — seeing friends, taking part in hobbies, etc.

  3. Because you’re missing out on the things that give you joy, you feel more crappy.

  4. Goto 1.

“So, does any of this seem familiar?”

That session is when psychiatry ‘clicked’ with me and I realised just how powerful it was. Simply by drawing a diagram on a board, my doctor both showed me the negative cycle I was in and how to fix it. I don’t mean to belittle her when I say she “simply” drew a diagram — she drew it in the way I “simply” write a computer program to solve a problem or an engineer “simply” uses arches to build a bridge capable of holding up a desired load. Her years of training and experience allowed her to express a concept completely alien to me in no time at all.

My tasks were to force myself to do things I knew I enjoyed doing, even if I didn’t think I wanted to. I started out within the house — practicing guitar, doing stuff with my railway, etc. Once I was able to do that, I focused my efforts on actually leaving the house and attending social events. It took an absolutely herculean effort, but I was able to attend a group of friends’ weekly-ish “Hacktisdag” pizza-and-programming gathering again, if only for a couple of hours to start with. Baby steps.

From then on, I started recovering in leaps and bounds and I was able to pull myself out of the negative cycle with some dedication and help from my wife.

Back To Normality

When I think back to the six month period between late 2013 and early 2014, my mind should fill with happy memories of getting married to the love of my life and moving into a beautiful house together. Instead, my heart fills with a deep dread that the person I became — an introvert consumed by confusion and guilt, being driven away from his friends by his own hyper-destructive interpretation of events — is coming back.

It’s slowly getting better. Occasionally I’ll be taken by surprise as an otherwise innocuous comment manages to sting way more than it should, but those are getting rarer and I’m able to shake them off in a few moments.

I’m very lucky, though. By some absolute miracle, a wonderful manager noticed I was in trouble and led me to an incredible doctor who taught me how to identify when I might be having trouble and the steps I can take to mitigate the problems. I avoided depression. Just. Really, by the skin of my teeth.

When Robin Williams committed suicide, I read the same thought over and over again — what could someone so successful and wealthy possibly be so sad about?

I’m a successful young guy earning good money at a fun job, and I have lots of friends and live with my wife and dog in a beautiful home. I really have nothing to complain about. Yet all it took was a few emails and before I knew it I was so outside of my own mind I didn’t know which way was up. I wound up huddled on an underground train, scared and confused and crying and helpless.

And remember: I wasn’t even depressed.

We don’t hesitate to seek help with our cars, our computers, hell, even the rest of our bodies when they break. Somewhere along the way it became taboo to talk about mental health — a subject reserved for conversations that start with “Can you keep a secret?” or, too often, never start at all.

I really wish this would change. I genuinely believe it could save lives.

Epilogue

Just over a week ago I visited my Mother’s final resting place for the first time since she died — a remote spot on top of a mountain in the Alps near the French-Italian border.

My Mother’s Resting Place, ~2100m Above Sea Level Atop Combe Chauve

After an hour-long descent in a battered old 4x4, I slumped into my Mum’s old sofa and opened my laptop to find an email from the solicitor. They’d finished administering her estate, pending my approval.

That email was sent while I was standing on top of the mountain taking in the beauty of it all, less than five minutes after I took the above photo.

Perhaps I should have gone up there sooner.

Compile-Time NSLocalizedString Key Checking Part 2: Other Languages

In my last post, Compile-Time NSLocalizedString Key Checking, we explored a solution that elevated .strings files from “just another resource” to something that we could get compile-time checks on.

However, these checks only worked on our “master” .strings file — the one strings get added to first, typically in Base.lproj or your local development language.

This becomes a problem sometimes as I’m working in a flow like the one diagrammed below.

Translations take a while to get done – perhaps a week or more. In the meantime, I’m working on other things and sometimes updating the code for the feature that’s getting translated in a way that means the strings need to be updated.

Since this process is very asynchronous, we quite often bump into problems caused by out-of-sync translations while testing. This is normally not too much of an issue, but it’s a waste of everyone’s time if we give a build to a tester in another country and get a report back that some translations are missing.

Since we’re working with a huge number of strings and with a third-party localisation service that isn’t integrated into Xcode at all, manually diffing .strings files is a pain, and is really a problem that should be dealt with by the computer.

The Solution

A picture is worth a thousand words.

verify-string-files is a little tool I wrote (and is available on my GitHub) that emits warnings or errors if a string is present in the “master” .strings file but is missing from any localisations.

Usage is very similar to my tool that generates header files from .strings files, but a bit simpler – it takes a single input file, the “master” file, and automatically finds matching localised files.

To integrate it with your Xcode project, add a custom build step at any sensible point in the build process that runs the following script:

"$PROJECT_DIR/Vendor/verify-string-files/verify-string-files" \
    -master "$PROJECT_DIR/Cascable/Base.lproj/GeneralUI.strings"

It uses /bin/sh as its shell, and I have the verify-string-files binary in the Vendor/verify-string-files directory of my project.

The tool will output log messages if it finds any problems, and Xcode will pick them up and display them just like my screenshot above. If you want the tool to output warnings instead of errors, add the -warning-level warning parameter to verify-string-files — a useful thing to do is have the tool emit warnings when debugging but emit errors if you try to make a release build with missing strings.
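
For example, a Run Script phase along these lines keeps the check relaxed for Debug builds and strict everywhere else. This is only a sketch built on the same paths as above and on Xcode’s standard CONFIGURATION variable, so adjust it for your own setup:

EXTRA_ARGS=""
if [ "${CONFIGURATION}" = "Debug" ]; then
    # Missing translations are only warnings while developing…
    EXTRA_ARGS="-warning-level warning"
fi

# …and the tool's default (errors) fails any other build.
"$PROJECT_DIR/Vendor/verify-string-files/verify-string-files" \
    -master "$PROJECT_DIR/Cascable/Base.lproj/GeneralUI.strings" \
    $EXTRA_ARGS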

You can find the verify-string-files project over on GitHub under a BSD license. Happy localising!

Compile-Time NSLocalizedString Key Checking

There are two typical flows for using NSLocalizedString to localise your application’s strings:

  1. Type the “base” string into your source directly, then use genstrings to generate strings files. In your .m files, your calls look like this: NSLocalizedString(@"Please enter your name:", @"String used to ask user for their name");

  2. Type a unique key into your source, then add the string for that key straight to the strings file. In your .m files, your calls look like this: NSLocalizedString(@"CBLUsernameFieldLabel", nil);
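
For reference, the .strings entries behind those two calls end up looking roughly like this (the second entry’s value is just an example):

/* Approach #1: genstrings generates this entry, using the base string as both key and value. */
/* String used to ask user for their name */
"Please enter your name:" = "Please enter your name:";

/* Approach #2: you write the key and its value into the strings file by hand. */
"CBLUsernameFieldLabel" = "Please enter your name:";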

There are various merits to both approaches, and I’m not here to argue which one is best. In my world, at work we use approach #2 because our localisation process kinda requires it, and I use #2 in personal projects because, well, seeing user-facing language in .m files gives me the heebie-jeebies — those files are for computer language, not people language.

This post is mainly for people who use approach #2.

Whither Error Checking?

At least once in your life, you’ll have seen something like this in your projects:

Even worse, if you’re using approach #1, you might not notice the missing localisation until you’ve shipped your product and start getting complaints from customers that half your app is in English and the other half is in Spanish.

The problem is that there’s no compile-time checking of strings files, and while there are a few debug tools you can use to spot un-localised strings, in the real world they won’t be run nearly as often as they should.

After extensive research (ten minutes of Googling) and a quick poll of Twitter (which resulted in one suggestion involving grep, and an argument) I couldn’t really find anything like this.

If You Want a Job Doing…

I ended up writing a little tool that takes a .strings file as an input and outputs a header file containing NSString constants for each key in that file. It turns this:

…into this:

Now we have compile-time checking that my keys are present and correct, and we get autocomplete for free. Much better!

The tool is very simple, and is 80% error checking. It reads the keys in using NSPropertyListSerialization and writes the found keys out to a header file. You can see the source over on GitHub.
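
To sketch the idea (the real tool’s output format may differ in the details): given a strings file entry like the CBLUsernameFieldLabel one from earlier, the generated header simply declares a constant for that key whose value is the key itself, so a typo becomes a compiler error rather than a silent fallback:

// GeneralUI.strings (input):
//   "CBLUsernameFieldLabel" = "Please enter your name:";

// GeneralUI.h (generated, roughly):
static NSString * const CBLUsernameFieldLabel = @"CBLUsernameFieldLabel";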

Putting It All Together

To integrate this into your project, there are three steps:

  1. Generating the header files when your project builds.
  2. Telling Xcode where to find the generated files at build time.
  3. Importing the generated header files so you can use them.

First, you want to create a custom build step in Xcode before the Compile Sources build step to generate header files from your strings files. You could be less lazy than me and create a custom build rule to automatically do this to all your strings files, but I’m lazy. My custom build step looks like this:

"$PROJECT_DIR/Vendor/generate-string-symbols/generate-string-symbols" \
    -strings "$PROJECT_DIR/Cascable/Base.lproj/GeneralUI.strings" \
    -out "$BUILT_PRODUCTS_DIR/include/GeneralUI.h"

It uses /bin/sh as its shell, and I have the generate-string-symbols binary in the Vendor/generate-string-symbols directory of my project. It places the generated header file in the include directory of the build directory.

Next, you need to tell Xcode where to search for your files. Make sure your project’s Header Search Paths setting contains $BUILT_PRODUCTS_DIR/include.

At this point, you can start using the symbols in your project. However, you’ll need to #import your generated header file(s) in each file you want to use localised strings in.

To get around this, you can #import them in your project’s prefix header file.

In my project, I have a “convenience header” which imports the generated files and provides a couple of helper macros to make localisation a little nicer, especially considering I use non-default string table names.

CBLLocalizedString.h
#import <Foundation/Foundation.h>
#import <GeneralUI.h> // Generated from strings file

#define CBLLocalizedString(x) NSLocalizedStringFromTable(x, @"GeneralUI", @"")
#define CBLLocalizedStringWithFormat(x, ...) [NSString stringWithFormat:CBLLocalizedString(x), __VA_ARGS__]
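
With those macros in place, call sites stay free of user-facing language and get both autocomplete and compile-time checking on the keys. A typical call site might look something like this (the labels and the CBLGreetingFormat key are illustrative names, not from the real project):

// CBLUsernameFieldLabel comes from the generated header; format-style keys go through the format macro.
self.usernameLabel.text = CBLLocalizedString(CBLUsernameFieldLabel);
self.greetingLabel.text = CBLLocalizedStringWithFormat(CBLGreetingFormat, username);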

…and you’re done! You can find the generate-string-symbols project over on GitHub under a BSD license. Enjoy!

Photo Challenge: Time Warp

Over a year ago, I “reviewed” my Canon 6D. I love that camera, and it can take beautiful photos. However, it’s big, especially with the battery grip attached.

This means that I only take it with me when I actively want to take photos, which doesn’t include, well, 95% of my life. Sure, I always have my iPhone with me, which has a great camera for a phone… and until that suffix — for a phone — is gone, it’s not going to cut it as something to record photos that need to last a lifetime. It’s great at photos of landscapes or people where everything is roughly the same brightness, but as soon as you show it something challenging such as a scene with bright areas and dark shadows, the poor dynamic range sticks out like a sore thumb. As a friend put it: “So, let’s summarise: Your device which has no other purpose than to be a camera, is better at being a camera than your phone which is many devices. Who knew?!”

Anyway, my search for a small but high-quality camera led me to Micro Four-Thirds (MFT) cameras, and on to the snappily-titled Olympus OM-D E-M10. I’ve always had a soft spot for Olympus cameras (I still have an Olympus OM-1 which was given to me as a child by my father), and after a bit of deliberation I picked one up.

Once I got it home, I realised just how similar it was to my 1970s-era OM-1!

Occasionally I pick up that camera and wish I could take pictures with it again. I have fond memories of using that OM-1 when I was younger and the excitement of dropping off my roll of film at the local chemist (followed often by the disappointment of all my photos being terrible). Perhaps it’s all nostalgia, but it’d be fun to take photos like that again.

So, I present to you…

The “Time Warp” Photo Challenge

The idea is to take photos using only the features available to you on a 1970s-era SLR like my OM-1. Here’s the OM-1’s feature list:

  • Manually adjustable shutter speed (dial on camera)
  • Manually adjustable aperture (dial on lens)
  • Manually adjustable focus (ring on lens)
  • Adjustable ISO (replace film with film of the desired ISO)
  • Clockwork self-timer
  • Photo capacity of 36 shots with the right film
  • Light meter (literally the only electrically powered thing on the camera)

So, here are the rules of the challenge:

  1. You’re only allowed to take 36 shots.
  2. Your first viewing of the photos must be back at home after you’re done. You’re not allowed to look at them on the camera screen, or delete them.
  3. You have to choose your ISO value at the beginning of each set of 36 photos.
  4. Manual focus only.
  5. Manual exposure controls (shutter speed, aperture value) only.

If you really want to commit, you can modify #2 to be “Your first viewing of the photos must be after getting them printed at a photo store”, but I want this to be fun rather than a complete loss of modern technology.

With some setup, it’s actually pretty easy to simulate these limitations on a modern camera — otherwise it’d be way too easy to “accidentally” steal a glance at the screen after taking a photo, or to go over the photo limit:

  • For #1 I found the smallest memory card I could (2GB) and filled it up with a big empty file to enforce the 36 photo limit (there’s a quick sketch of this after the list).

  • For #2 I disabled “Photo Review” on my camera, so it doesn’t automatically display the photo on the screen just after it’s taken.

  • #4 was enforced for me by virtue of my old lens.

  • #5 was partly enforced by my old lens, but keeping my camera in M mode isn’t too hard.
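
If you want to recreate the 36-shot limit, filling the card is a one-liner in Terminal. This sketch assumes a card that mounts as /Volumes/SD_CARD and sizes similar to mine, so adjust to taste:

# Write a ~1.4GB junk file to the card so only ~36 shots' worth of space remains.
dd if=/dev/zero of=/Volumes/SD_CARD/filler.bin bs=1m count=1400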

The equipment I ended up using:

  • My Olympus OM-D E-M10
  • My old Olympus F-Zuiko 50mm f/1.8 lens
  • A Novoflex MFT-OM lens adapter
  • A 2GB SD card I found in an old point-and-shoot camera, filled up with an empty 1.4GB file to only allow 36 photos to be taken

My Results

Note: All photos after this point were taken under the constraints of the challenge, using the equipment mentioned above. Only minimal editing has been done (cropping, slight contrast boost, etc).

I have to admit, I wasn’t optimistic at first. I tried manually focusing with the new lens that came with my OM-D and had a really hard time. The lens is focus-by-wire and very sensitive — I had trouble finely adjusting focus at wider apertures. I tried with some of the lenses on my Canon 6D which weren’t focus-by-wire, but the manual focus ring was still too sensitive for my liking.

Then, I remembered about my old manual lenses for the OM-1 I had in a bag somewhere in the basement. Maybe they’d be better! A decent adapter to put them on my OM-D is an incredibly expensive item considering “it’s just a lump of metal” but I gritted my teeth and made the purchase…

…and what a transformation occurred!

It’s an incredible cliché to say “They don’t make them like they used to!” but holy crap, this… thing, this thirty-year-old, dusty, dented thing absolutely transformed my dinky little camera into something that felt meaningful. The hyper-sensitive and lifeless manual focus on my modern lens was replaced with a mechanical focus ring that feels beautiful to the touch, gliding purposefully but smoothly under my fingers. Suddenly, focusing manually was a wonderful experience again.

Suddenly, I was excited about this challenge. Time to go take some pictures!

Now, I’ve never been “that guy” — you know, the one that has his camera on high-speed continuous shooting and takes at least six shots of everything he wants to take a picture of, but I’m completely used to having a 32GB memory card in my camera, allowing for over 1,000 shots before I need to start worrying about free space.

However, having a limitation of 36 photos transforms the experience, especially when combined with the fact I can’t delete or look at the photos I take. Suddenly, the shutter click of the camera becomes the solid thunk of a letterpress machine, a mechanical and meaningful action that you’re acutely aware of. Every photo becomes important.

In the few hours I spent doing this challenge the other day, during an afternoon out driving through a nature reserve to Nyköping, I ended up skipping a ton of photos that I’d normally have taken because they just didn’t feel right.

And you know what? I was loving every minute of it. “I can’t wait to get back home and look at these pictures,” my wife said after I took a photo of her and our dog, Chester. “Don’t look at them before I’m ready!”

Back Home

Once we got back home, my wife and I excitedly loaded up Lightroom and imported the pictures. Every single one got a response from at least one of us. Not a single photo was wasted, and apart from a couple where the focus was so bad it ruined the photo, not a single one was deleted.

My computer has over 6TB of local storage available to it right now, with infinitely more available in “The Cloud”. My $20 memory card allows my camera to take over 1,000 photos at once. And you know what? It’s too much. I have many albums in my digital photo library of trips and holidays I’ve been on that contain hundreds upon hundreds of photos. 99% of them are useless, drowned in a sea of pictures of crap I no longer care about. Sure, that lamppost might’ve looked cool at the time, but three years down the line? Its meaning is completely lost among all the other lampposts and trees and street corners in that album.

I then noticed that all of my favourite photo albums only have a few photos in them due to constraints at the time. The WWDC Photo Walk album, for instance, or the time I went to San Francisco for two days and had a couple of hours to kill before the event started.

Conclusion

I absolutely adored this challenge and will be doing it again, repeatedly, in the future. The combination of a good manual lens and the other constraints really reconnected me with photography, and by extension the world around me and how I look at it through a viewfinder.

A more startling realisation, though, is that even when not doing this challenge I should be taking fewer photos, not more. This is going to be a hard balance to achieve, I think, since the proliferation of cheap storage and the desire not to miss a potentially good photo go really well together.

With this in mind, I went to a photo store in Stockholm yesterday and asked if they had any SD cards that were one gigabyte or smaller. I may as well have asked for a floppy disk.

You can follow along with my Time Warp challenge over in my Time Warp set on 500px.

2013 Mac Pro Review: Developer Edition

On June 10th 2013, I was sitting in the upper floor of Moscone West in San Francisco, along with a few thousand fellow iOS and Mac developers.

Roused by a somehow stirring video of nothing but words and dots, I sat back and let the RDF wash over me. iOS 7 is coming, and it’s going to be awesome!

The keynote started as normal. Retail stores. Shipped a quintillion iPhones. Yawn. Craig Federighi started demoing 10.9 Mavericks, which was pretty interesting, but the keynote was mainly for demoing consumer features. I noted a few things that looked particularly interesting and started getting excited about the new APIs that would be coming in Mac OS X.

Then, Phil Schiller came onstage. I’ve had a soft spot for Schiller ever since he pretended to be on a roller coaster while demoing iChat video effects during a keynote years ago, and I always mentally shout “PHIL!!!” when he comes onstage, but I don’t really know why. Phil started talking about minor bumps to MacBook Airs. Zzzzzz.

Wait, what’s this? Sneak peek of new Mac hardware? Finally, the new Mac Pro! Everyone cheers. I cheer. I feel a little bit sad that now the iMacs are powerful enough for everything I need I can’t justify the expense of a Mac Pro any more, but I’m excited. “We’re going to go a little over the top…” says Phil.

My ribcage starts to rumble. The bass ramping up throughout the room as the video starts was more of a feeling than a sound. Angles start flashing across the screen and not much else. Sigh. Another non-announcement.

Wait. It’s round. If it’s round it has to be…

Oh no. No no no no no.

It’s small.

“Can’t innovate any more, my ass!” Phil quipped, a smile on his face giving away a sense of genuine pride. In seconds the non-announcement had turned into a full-on discussion of the new machine.

Phil started talking about the design. He got to “Unified Thermal Core” and I was gone. They’ve made a spiritual successor to the G4 Cube! Phil started reeling off numbers I didn’t care about as I worked myself into a tizzy.

You see, I have a special bond with the G4 Cube. It was my first “real” Mac, and my mother bought it for me when I was a kid. I admired the beauty of the engineering of that machine. I learned to program on that machine. I cycled six miles to my local Mac reseller the day Mac OS X 10.0 came out and excitedly cycled home in the rain to install it on that machine. I’ve had many Macs since, but none had the soul and beauty of the G4 Cube. Coupled with a pile of nostalgic memories, I loved that machine so much I still have one to this day.

Generation Gap

Well, that was it. There was no way I couldn’t have one. I let my fiancée know the bad news, then tweeted what I thought was a fun but stupid screenshot of our conversation in what turned out to be my most popular tweet of all time.

Don’t worry — we still got married.

Now, time to retroactively justify spending so much money on a computer!

The Review

Disclaimer: This review is about day-to-day usage of a 2013 Mac Pro from the point of view of a Mac OS X and iOS developer, using tools like Xcode to get work done. The benchmarks here are only useful when compared to other benchmarks in this post, and my main focus is the overall “feel” of the machine compared to other modern Mac equipment rather than raw numbers. For detailed, millisecond-precise benchmarks I can highly recommend Ars Technica’s review of this machine.

What I’m Working With

Let’s begin by discussing what I’m working with. First, the hardware. I’ll mainly be comparing the Mac Pro with my work-provided 15” Retina MacBook Pro since they’re the two machines I have access to, and my wife won’t let me fill up her iMac with my crap (which, to be fair, is probably a wise choice).

                 2013 Mac Pro                        2013 15” Retina MacBook Pro
CPU              3.5 GHz 6-Core Intel Xeon E5        2.7 GHz 4-Core Intel Core i7 “Ivy Bridge”
RAM              32GB 1867 MHz DDR3 ECC              16GB 1600 MHz DDR3
Graphics         Dual FirePro D700 6GB               NVIDIA GeForce GT 650M 1GB
Storage          1TB PCI-E SSD                       256GB PCI-E SSD
Specced Price    $5,799 / £4,739 / €5,199 (DE/FR)    $2,799 / £2,399 / €2,799 (DE/FR)

As for coding projects, I deal with a number of projects both at work and as personal side projects. Of these, I’ve chosen two to use in this review — for simplicity’s sake, I’ll call them Large Project and Small Project.

I’ve chosen these projects as I feel they reflect two common use cases — Large Project is a product typical of a cross-platform company with many people working on components of the same product, and Small Project is a typical small app that a small development shop or single developer might produce in a couple of months.

To reiterate my disclaimer above, I’m not going to go into detail about the exact number of lines of code, partly because of sensitivity concerns as I’m talking about a commercial application, and partly because it doesn’t really matter. However, to give you an idea of the size of the projects:

                                 Small Project              Large Project
Derived Data*                    150MB                      3.98GB
Debug Binary Size**              2MB                        105MB
No. of Source Files              45 Obj-C .m, 30 C++ .cpp   I have no idea. A lot.
Benchmarked SDK & Architecture   Mac OS X 10.9 (x86_64)     iOS 7.1 Simulator (i386)

* A project’s “Derived Data” is a collection of files generated by Xcode while indexing and building a project. It contains object files, indexes, build logs and various other files that allow Xcode to cache unchanged parts of a project for faster incremental building. The size was measured by deleting the project’s existing Derived Data folder, opening the project and doing a single debug build for a single architecture, then waiting for Xcode’s indexing process to complete.

** Debug binary only, no resources, for a single architecture.

The Small Project is a small Objective-C Mac app that contains 3 targets, all of which are dependencies of the main application and are built as part of a normal build. It contains some C++ code in the form of a third-party open source library, and has a nice and simple build process — open Xcode, push build.

The Large Project is a large iOS app that contains over 100 targets, most of which are dependencies of the main application and are built as part of a normal build. Some targets are heavily or completely C++ based, and the project has a very complex build process involving a wide variety of tools and build scripts in various languages.

Benchmarks

Alright, let’s get down to some benchmarking!

Build all the things! Activity Monitor wasn’t running during benchmark runs, but typical day-to-day apps were (email, Safari, Twitter, etc) to reflect a “normal” environment.

Since my Mac Pro has 32GB of RAM, I also benchmarked building the projects while using a RAM disk for Xcode’s Derived Data folder. I didn’t do this on the MacBook as 16GB isn’t enough to do this with the Large Project.

Sidebar: If you’re a developer reading this, I made a little command-line tool that simplifies the process of creating a RAM disk, designed to be friendly to being run at startup. You can find it over on my GitHub pages.
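
If you’d rather not install anything, the stock tools can do the job too. Here’s a minimal sketch that creates a 4GB RAM disk (ram:// takes a size in 512-byte sectors, so 4GB is 8388608 sectors), which you can then point Xcode’s Derived Data location at in its preferences:

diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://8388608)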

The builds had some Xcode debug build time optimisations applied as described over here, and are all debug builds for a single architecture.

                      Small Project   Large Project
MacBook Pro           9 seconds       6 minutes, 2 seconds
Mac Pro (SSD)         6 seconds       3 minutes, 58 seconds
Mac Pro (RAM Disk)    5 seconds       3 minutes, 40 seconds

As you can see, the Mac Pro builds projects around a third faster than my MacBook, which, in itself, isn’t all that surprising. With the Derived Data folder placed on a RAM disk, the Mac Pro is 40% faster than the MacBook.

One nice thing to note is that while doing these benchmarks, I had all six cores of the machine pegged at 100% for the best part of an hour. During that time, the fans of the Mac Pro barely made a whisper — a welcome change from the screaming fans of the MacBook.

A Note On Release Builds

I considered doing benchmarks using release builds too, since they’re slower: optimisation is CPU-intensive, and if you’re building for multiple architectures, build time will increase almost linearly (around twice as long for two architectures, three times as long for three, etc). As a rough guide, a typical release build for an iOS app that supports both arm64 (iPhone 5S and iPad Air) and armv7 (everything else at the time of writing) will take roughly 2.5x as long as a single-architecture debug build.

However, this review focuses on a developer’s day-to-day workflow rather than build server duties. That said, I did do a couple of release builds of the Large Project, and you can expect the speedup to be similar to that of debug builds.

Day-To-Day Workflow in Xcode

This is where things get interesting. Clean builds only tell a small portion of the story — day-to-day, clean builds are somewhat of a rarity. Instead, we make many small incremental builds as we write some code, make sure it builds, then test the changes out by running the application or running unit tests.

My MacBook is my daily work machine, and we’ve been at each other’s side for a year or so now. I know it in and out, and until I started working on the Large Project with my Mac Pro, it felt fine.

A typical small task I might do involves finding a file, opening it up and finding the method I need to work on. Then, I’ll need to quickly look up the API reference of something I need to add to that method, then write the new code and test it.

It goes like this:

  • Command-Shift-O to open “Open Quickly”.
  • Start typing the class name HTTPIma….
  • When the file comes up in the list, press Return to open it.
  • Navigate to the method I need.
  • Declare an instance of the new API I need to use: NSURLConnection *connection;.
  • Command-Option-Click the NSURLConnection name to open its header in the Assistant Editor.
  • Read the documentation and amend my code accordingly.
  • Close the Assistant Editor.
  • Run the application and test the new code.

Xcode’s “Open Quickly” panel

After a week using the Mac Pro and doing this regularly, I tried it again on my MacBook.

  • Command-Shift-O to open “Open Quickly”.
  • Start typing the class name HTTPIma….
  • When the file comes up in the list, press Return to open it.
  • Open Quickly is still processing my typing, so by the time the Return registers, a different file is selected.
  • Open the wrong file. Grumble.
  • Repeat, this time waiting until Open Quickly has settled down.
  • Navigate to the method I need.
  • Declare an instance of the new API I need to use: NSURLConnection *connection;.
  • Command-Option-Click the NSURLConnection name to open its header in the Assistant Editor.
  • Beachball.
  • 5 seconds later, the Assistant Editor appears.
  • Read the documentation and amend my code accordingly.
  • Close the Assistant Editor.
  • Beachball.
  • 5 seconds later, the Assistant Editor disappears.
  • Run the application and test the new code.

My MacBook can’t possibly be this bad, can it? After working on the MacBook for a few hours, I got used to it again and realised that it didn’t seem slow before because I’d briefly do something else while waiting for Xcode to catch up — glance at Twitter, take a sip of my drink, etc.

My whole Xcode experience is like this on my MacBook with the Large Project. Getting to the Build Settings pane from a source editor takes a good few seconds as it takes time to bring up each new panel as you navigate there. After a year of nothing else I’d gotten so used to it I didn’t even notice it any more.

I’ve found this week with my Mac Pro to be far more productive than working with my MacBook. It may partly be due to the fact I’m also working from home, away from the distractions and annoyances of the office, but the fact I don’t have time to glance at Twitter or sip my drink as I navigate around certainly helps keep my concentration sharp.

It’s important to note that only the Large Project makes my MacBook behave this way. Working on smaller projects, including work projects much larger than the Small Project I talk about here, the Xcode experience is as fast and painless as it is on the Mac Pro.

Day-To-Day Workflow in Other Development Tasks

I don’t really use huge datasets in anything other than Xcode, so nothing surprising here. grep is noticeably faster on the awesome SSD, as is switching around branches.

One thing that is nice is the ability to run several virtual machines at once without having to care about system resources. This is particularly handy when testing features that involve remote controlling — I can have multiple iOS Simulators running at once without problems.

Conclusion

If you have a reasonably modern, well-specced machine and are bumping into its limits, the Mac Pro gives an amount of extra freedom that I honestly didn’t expect. My MacBook isn’t a bad machine at all, and I just assumed the large project I work with would bring Xcode to its knees on anything. I’ve felt a genuinely large improvement to my day-to-day productivity on the Mac Pro, to the point where working on my MacBook feels clunky and annoying.

If your current workload is bringing your computer to a grinding halt, you might find the Mac Pro gives a refreshing freedom to your day-to-day workflow. If that’s the case, I’d really recommend it.

Otherwise, I’d really struggle to justify the cost purely on a development basis and have a really hard time imagining an indie developer or small development shop generating a project large enough to see the benefits — especially since the newer iMacs and MacBook Pros are excellent development machines and give a great price-performance ratio.

In short, if you have a modern machine and aren’t already thinking “I really need something more powerful than this”, the Mac Pro is really hard to justify.

Unless you loved the G4 Cube — then you should buy one anyway, because the G4 Cube was awesome.

Perhaps I should get that printed on a mug.

Xcode Bots: Common Problems and Workarounds

I love Continuous Integration. I’ve never bothered with it for my side projects since it didn’t seem worth the effort, but now that it’s built into Xcode I thought “This is Apple! Two clicks and I’m done, right?”

Wrong.

I spent a few hours battling with the Xcode service on my Mac OS X Server, and below I detail workarounds for the problems I encountered getting my projects to build.

Don’t get me wrong — I love Bots, and I’m sure that in a few months these issues will be fixed as they’re getting tripped up by very common situations. Remember to file those Radars!

After a little work, my main side projects are continuously integrated. Hopefully this post will help you do it faster than I did!

Note: All these tips assume you’re using git. If not, well, you’re on your own!

Repositories that require an SSH key to access

If you host your repository on GitHub or similar, you’re likely to use an SSH key to access your code — especially if you have a private repository. GitHub will actually let you check out over https, but where’s the fun in that when you can follow this simple seven-step process?

1) If your SSH key requires a password, you need to make a password-less version of it using ssh-keygen -p (when it asks for a new password, just hit return); there’s a short sketch of this after the list.

2) In Xcode, choose Product → Create Bot…, name your Bot and untick Integrate Immediately and click Next.

3) In the next pane, choose Guest as your authentication method. This will succeed on your local machine since you have your keys installed. Continue through and create your Bot.

4) Open the Server application and find your new repository and edit it. Change the authentication method to SSH Key and the user name to the correct value (this is git for GitHub).

5) At this point, you’ll notice a new SSH key has been created for you. Either upload the public key to your host, or click Edit… to replace it with your own password-less keys.

6) Save your changes, return to Xcode and click “Integrate Now” in your new Bot.

7) Success!

8) File a Radar asking Apple to add the SSH Key authentication option to Xcode itself. Feel free to duplicate mine: #15184645: Bots: Can’t enter an SSH Key to authenticate a repository in Xcode.

It’s worth noting that you can make this process slightly simpler by manually adding the repository to the Xcode service in the Server app before creating your Bot. This way, you can set up your SSH keys right away. Make sure, however, that you get the URL of the repository to match the URL you have checked out on your local machine, otherwise Xcode won’t pick it up.
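
For step 1, one way to do it without touching the key you use day-to-day is to strip the passphrase from a copy. The paths here are just examples:

# Copy the key for the Xcode service, then remove the passphrase from the copy.
cp ~/.ssh/id_rsa ~/.ssh/id_rsa_xcode_bots
ssh-keygen -p -f ~/.ssh/id_rsa_xcode_bots   # enter the old passphrase, then just hit return for the new one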

Code Signing and Provisioning Profiles

Ahh, Code Signing. It just got easy, and now there’s a new spanner in the works.

Bots run their builds in their own user. This means that even if they build fine when you log into your server, they may fail to build in the Bot. The tricky part here is making your private keys available to the Bot — downloading the certificate from Apple’s Developer Center doesn’t give you back the private key, so you’ll need to import it from a machine with a working build setup using Xcode’s Developer Profile Export/Import.

Once you can log into your server and build your project with signing, a simple way to allow Bots to access your signing certificates is to copy them from your user’s Keychain to the system Keychain.

Make sure you copy the private key by dragging it to the System Keychain. The certificate will be copied as well.

Once you have your certificates in the System Keychain, your Bots will be able to access them.
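
If you prefer the command line to dragging things around in Keychain Access, the security tool can do the same job. This is a sketch that assumes you’ve first exported the signing identity (certificate plus private key) to a .p12 file:

# Import the identity into the System keychain so the build user can access it.
sudo security import ~/Desktop/signing-identity.p12 \
    -k /Library/Keychains/System.keychain \
    -P "p12-password" -T /usr/bin/codesign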

Adding your Developer Team to the Xcode service in the Server app should automatically deal with Provisioning Profiles. If not, put your profiles in /Library/Server/Xcode/Data/ProvisioningProfiles.

Submodules

Submodules work fine as long as you don’t have a detached HEAD. If they require SSH keys to access them, after creating your Bot you’ll need to go over to the Server app and manually set up the keys for each submodule.

If you’re not sure if you have a detached HEAD, you can check from the command line or Xcode. On the command line, run git status in each submodule. If you get something like # HEAD detached at 89970d0, you have a detached HEAD. From Xcode, try to create a Bot. When it’s time to define authentication for each repository, Xcode will tell you the state of each submodule.

If any of your submodules read “(detached from…)”, they won’t work.

To fix this, you need to get back on a branch. The quickest, hackiest way to do this is to run git checkout -b my-awesome-branch in the submodule to make a new branch. Once that new branch is pushed and available on your remote, your Bot will be able to check it out correctly.
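
Once you’ve done that for each submodule, a quick way to double-check them all from the repository root is something like this, which prints a line for any submodule that’s still detached:

git submodule foreach --quiet \
    'git symbolic-ref -q HEAD > /dev/null || echo "$name is on a detached HEAD"'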

Note: I’ve noticed that Xcode 5.0.1 doesn’t seem to refresh branch information quickly. Try relaunching Xcode if things get weird.

Don’t forget to file a Radar! Again, feel free to duplicate mine: #15184702: Bots: Checkout failure in git projects that have submodules with a detached HEAD.

I made a mistake and now I can’t fix the repository from Xcode!

If you accidentally added a Bot with incorrect authentication details or some other mistake, deleting the Bot may not fix it. You also need to delete the repositories from the Xcode service in the Server app. When you do that, Xcode will offer to reconfigure them when you create a new Bot.

The Future

Despite these niggles, I really love Continuous Integration and Bots. I’ll try to update this post as I find more issues, and hopefully as the Bots feature matures and these issues go away. Follow me on Twitter to get update announcements.

Updates

  • October 9th, 2013: Added Radar bug numbers for SSH keys and submodules.

Summer Experiment: Hacking, Agile Style

I’ve been working at Spotify for two and a half years now. Previously, I ran my own software company for a number of years. Before that, I was in full-time education from the age of… well, whatever age you start going to nursery school.

One big thing I’ve learned since being at Spotify is actually how to make a product. My Computer Science degree taught me how to write programs, and from that I went straight into shipping my own software. Honestly, now that I’ve worked at Spotify for a while I realise what a miracle it was that we even shipped anything back then, let alone earned enough money to support multiple employees.

Taking a Break From Programming? Let’s Write a Program!

Like all good programmers I know, I enjoy working on the odd spare-time project now and then. What typically happens in my case is that I sit down, go “Oh, I know!” and start coding. What I normally end up with is a working application that is clearly written by an engineer — it works just fine, but looks like someone fired the AppKit Shotgun at the screen.

The most recent example of this is my new camera. After writing a blog post complaining about how it was stupid for having Facebook and Twitter built-in, I set about sort of reverse-ish engineering the protocol it uses so I could remote control it and download images from the comfort of my chair. The protocol is PTP over IP — PTP is a documented standard, and the “over IP” part is pretty standard too, which is why I hesitate to say I reverse-engineered the protocol. However, the proprietary extensions added by Canon are largely undocumented, which is where I’ve added new knowledge to the area.

After a couple of weekends with Wireshark and Xcode, I had an Objective-C framework with a reasonably complete implementation of the PTP/IP protocol — enough to get and set the various properties of the camera, stream the live view image, perform commands and interact with the filesystem.

A typical side-project: function, but no form.

After a few weeks of doing basically nothing on the project, I went to WWDC and saw Apple’s iOS 7 announcement. After years of being a “Mac and Frameworks” guy, I finally started getting excited about iOS UI programming again, and this camera project seemed like a great way to get back into making iOS apps and learning all the cool new stuff in iOS 7.

However, wouldn’t it be nice to actually make a product rather than an engineer’s demo project?

Something Something Buzzword Agile

At Spotify, we use Agile. I’m not a die-hard fan of Agile or anything (perhaps it’s just our implementation of it), but I do appreciate how it lets you be organised in how you get work done and how it gives you a picture of what’s left to do.

So, during my July vacation I set aside two weeks with the intention of making a little iPad app to remote control my camera. Rather than immediately creating a new project and jumping into code like I normally do, I decided to employ some techniques — some Agile, some plain common sense — to manage my progress.

Step 1: Mockups

First, I spent a couple of days in Photoshop making mockups of what I wanted to do. The logic behind this was to try and avoid the “blank canvas” feeling you get a few moments after creating a new project in Xcode. With a few mockups under my belt, I would hopefully be able to dive right in without wasting time implementing a UI that I’d easily have seen was unworkable had I simply drawn a picture of it first. Indeed, my first idea turned out to be unusable because I’d assumed the camera’s image would fill the iPad’s screen:

Originally, the app’s controls would flank the image vertically, hiding and showing with a tap.

However, when I actually imported an image from my camera it was obvious that layout wouldn’t work. I then spent a little while figuring out how to best deal with an image with a different aspect ratio to the iPad’s screen.

A few mockups refining the main UI. The challenge here is that the aspect ratio of the image coming from the camera isn’t the same as that of the iPad.

Step 2: Insults

When working on anything, having an open feedback loop with your peers is essential. With a project that needs to be done in two weeks, that loop needs to be fast and efficient. Unfortunately, I learned rather quickly at Spotify that in a normal working environment this is completely impossible — apparently, “That idea is fucking stupid and you should be ashamed for thinking otherwise” is not appropriate feedback. Instead, we have to have multi-hour meetings to discuss the merits of everything without being assholes to one another. Coming from the North of England, this came as a huge shock to the system — being assholes to one another is pretty much the only means of communication up there.

For this project, I enlisted the help of Tim (pictured above, enjoying a game of cup-and-ball). Tim and I are great friends, and as such hurl abuse at one another with abandon — exactly the traits required for an efficient feedback loop. Indeed, throughout the project he’d belittle and insult my bad ideas with such ruthless efficiency that I never wasted more than an hour or so on a bad idea.

This is basically the “rubber ducking” theory, except that the duck is a potty-mouthed asshole who isn’t afraid of hurting your feelings.

Step 3: Planning and Monitoring Tasks

This is the typical Agile stuff — I created a note for each task I needed to do in order to have the application I wanted and placed them on a board with Waiting, Doing and Done columns on it. On each note was an estimate of how long I thought that task would take in days, with a margin of error on tasks I wasn’t that sure about — mainly those that involved more protocol reverse-engineering with Wireshark, since I hadn’t figured out the advanced focusing and auto exposure parts of the protocol yet.

Once I’d finished my board, I had between fifteen and twenty-five days worth of work to do in ten days. Obviously that wasn’t going to happen, but it was encouraging that everything that looked like it wouldn’t make the cut was an advanced feature rather than core functionality.

Step 4: Programming!

Finally, after three days of mocking and planning, I pushed “New Project” in Xcode and started coding. This seemed like a lot of honest-to-goodness work for what is supposed to be a fun side-project!

Two Weeks Later…

As a bit of a side note: It’s been a long time since I wrote a “proper” application for iOS (my last complete iOS app ran on the original iPhone), and I did everything the new way: it’s all Auto Layout with roughly half of the constraints done in Interface Builder and the rest in code, and there isn’t a drawRect: in sight. I had a lot of fun learning about the new stuff in iOS 7!

But, the golden question is… was adding three days of overhead to plan out what is ostensibly a throwaway side project worth it? Without it, I’d have had thirteen days to code instead of ten, and as a programmer I enjoy coding a lot more than I do planning and drawing boxes in Photoshop.

The answer, much to my surprise, is an unreserved YES.

Progress

I greatly enjoyed the sense of progress moving notes across the board gave, especially when a task took me less time to implement than I’d estimated. It also gave me goals to work towards — having a task I was really looking forward to implementing on the board made working on the boring notes to get there go much faster.

The Board is the Truth

Those little sticky notes really stop you from cutting corners, and cutting corners is what differentiates a hacky side project from a polished product. For example, one of the most challenging things I encountered in the project was decoding the proprietary autofocus information the camera sends over PTP/IP. There are two main modes: one has the user move a single box around the screen and the camera autofocus within it; the other has the user choose from a set of fixed points that correspond to dedicated autofocus sensors in the camera.

The “single box” method was simpler to implement, and I implemented both the protocol work and the UI for it in a day. At this point I was tempted to move on to something else — I mean, you could control focusing now, right? — and without that sticky note on my board I would have done so. After a bit of convincing by Tim, I just couldn’t bring myself to lie to my board and I spent two days implementing the other autofocus method. I’m really glad I did, because I had a ton of fun and ended up with a much more polished product.

Contrast-detect autofocus was fairly simple to implement, as it’s the same on every camera — a single rectangle is defined, which can be moved around the image to define the autofocus area.

Phase-detect autofocus was much harder to implement, mainly due to the focusing point layout — it’s different on every camera. My camera only has nine points, but high-end cameras can have many, many more. This means parsing the autofocus info from the camera properly, as it’ll have different data in it depending on which camera is used.
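
To make the difference between the two modes a little more concrete, here’s a rough sketch of the data you end up modelling. It’s written in C# (to match the language used in the later posts here) rather than the Objective-C the app itself uses, and the type and member names are mine, not the app’s; the actual PTP/IP payload is Canon-proprietary and isn’t shown.

// Hypothetical model only: illustrative names, not the app's real classes.
using System.Collections.Generic;

public enum LiveViewFocusMode
{
    ContrastDetect, // one freely movable focusing rectangle
    PhaseDetect     // a camera-specific set of fixed AF points
}

public struct FocusArea
{
    // Position and size of the focus area, in live view image coordinates.
    public float X, Y, Width, Height;
    public bool Selectable; // not every reported phase-detect point is selectable
    public bool Selected;
}

public class AutofocusInfo
{
    public LiveViewFocusMode Mode;

    // Contrast-detect: exactly one movable area.
    // Phase-detect: however many points the connected body reports (nine on
    // my camera, far more on high-end bodies), so this has to be parsed per
    // camera rather than hard-coded.
    public List<FocusArea> Areas = new List<FocusArea>();
}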

Statistics

By the end of the two weeks, my application was in a state in which I could use it to take a photo of the board!

Cheesy action-shot…

…of me taking this photo of the board.

As you can see, I added 1.5 days’ worth of work during the week and none of the advanced features got implemented, so all I have is a relatively basic remote control. However, this remote control is much, much more polished than it would have been if I hadn’t planned it all out in advance, and I’m much happier with what I ended up with than I would have been with an unpolished project with half-working advanced features.

The tasks that got completed are as follows:

Task                  | Estimated (days) | Actual (days) | Date Completed
Live View Image       | 0.5              | 0.5           | 23 July 2013
Connect/Disconnect UI | 2                | 1.5           | 23 July 2013
Grid                  | 0.5              | 0.5           | 23 July 2013
Histogram             | 1                | 1             | 24 July 2013
Half/Full Shutter UI  | 1                | 1             | 25 July 2013
Property Controls     | 2                | 1             | 29 July 2013
Metering              | 2 ± 1            | 1             | 30 July 2013
AE Mode Display       | 0.5              | 0.5           | 30 July 2013
Exp. Compensation     | 0.5              | 0.5           | 30 July 2013
Live View Focusing    | 2 ± 1            | 2             | 1 Aug 2013

I did twelve estimated days’ worth of work in nine actual days, and I either estimated tasks correctly or overestimated how long they’d take. The three tasks I added during the project came to one and a half days’ worth of work.

Conclusion

I actually like Agile in this setting a lot more than I do at work. I get to reap the benefits of organising my work without the tedious bureaucracy you encounter in multi-person teams of people you’re contractually obliged to be nice to. It really shows in my output — the app I’ve made is heading in the direction a real product would, and if I decide I’d like to put it on the App Store I can just pick it up and keep going, without having to go back and undo all the shortcuts I’d have taken in a typical side project.

Most importantly, though: I had a lot of fun doing this — more fun, in fact, than I normally have when working on a side project in such a concentrated way. Having mockups to work from and a visualisation of my progress made this project an absolute blast.

The Sine of Life

Note: This post is a personal piece and discusses death. If you’re here for programming and monster trucks, you may want to skip this one.

Nearly three months ago I was sitting here in my chair, writing a blog post about some of the lessons I’d learned from my failed software company. It discussed how, despite having a pretty crappy time of it, I’d come out of the process a smarter and better person. “Things are now looking up!”, I wrote, and discussed how my late father’s entrepreneurial spirit had been passed on to me to the point where I was looking forward to maybe starting another business in the future — one that would surely do better than my first! At the very least, I’d be able to avoid the mistakes I’d made before.

I finished the post mid-afternoon and decided to sit on it for a day and re-read it in the morning to make sure it still made sense with a fresh pair of eyes. I was in a pretty reflective mood — this was the first time I’d really written about my Dad, his death in 1998, and his influence on my life even though he’d long passed away. I was happy — my financial troubles from the aftermath of the failed business were all but gone, I was settling down with my fiancée in a new country and we had nothing but an uncomplicated life to look forward to.

That evening, my Mother died.

Once again, I was reminded just how fragile an illusion the “uncomplicated life” really is. Somewhat terrified by the timing of it all, I never let that blog post see the light of day.

Some Time Later…

Twelve weeks later, life is regaining a sort of twisted normality. Everything seems normal enough — I’m getting up, going to work, paying bills. It’s a little bit harder to gain motivation to do anything, and I’m constantly fighting a small, deep-down urge to run back to the UK and hole up in an old friend’s house and feel sorry for myself until he forcibly ejects me from the premises. However, considering that I’m now the root node of my future family, the urge to run back to a known safe place isn’t exactly shocking.

Thankfully, these minor niggles will be temporary. My Father died just when I was old enough to fully understand the magnitude of what happened, so this isn’t the first time I’ve faced my own mortality — something that invariably happens when a loved one passes. As easy as it would be to fall into a descending spiral of self-pity right now, I find myself compelled to go the other way — to embrace that my time in this world will be short and strive for something amazing.

I’m 28 and I’ve already done quite a lot. In the past fifteen years I’ve formed a successful company and enjoyed the spoils of a great wage. I’ve lost that company and endured the defeat of near bankruptcy. I’ve moved countries. I’ve been through the devastation that is a parent dying — twice. I’ve fallen in love. I’ve lived the lonely single life.

What persists through this chaos is not the sorrow of lost parents, the crushing pressure of £50,000 of debt after losing your entire income stream, or the pain of a poorly managed company failure severing friendships. What persists are happy memories — the joy of flying to Australia for Christmas with my family and girlfriend and driving down the coast in a rented convertible, of celebrating a successful product launch with a friend by driving a pair of Aston Martins around a track all day, and of countless other experiences I wouldn’t otherwise have had.

In life, the highs balance out the lows, much like a sine wave. Sure, some of my lows are my own fault, but that particular trainwreck had an incredible amount of highs to balance it out. I’ve picked myself up, dusted myself off and righted my wrongs — debts have been repaid, friendships repaired.

What I can’t do now is glide along in mediocrity — it’s time to ensure the upwards swing of my sine wave happens by grabbing life by the scruff of the neck and jumping in with both feet. That convertible car I want that’s entirely inappropriate for a country that spends four months a year buried under a metre of snow? Perhaps it’s time to place an order. Those app ideas that’ve been rolling around in the back of my head? Well, it might soon be time to give being my own boss another shot. What could go wrong?

Right now, I’m still feeling the effects of the low swing of the sine — there’s nothing like spending a week buried in paperwork that formalises “Both of my parents are dead, here’s a pile of figures” to the UK government to keep you in a dull mood.

Looking forward, though — looking up — I’m excited as hell.

Canon EOS 6D “Review”

I’ve been into photography since my Dad (who was a journalist) gave me his Olympus OM-1 SLR when I was a kid — a camera released in 1972 that was entirely manual: the battery was optional and only powered the light meter.

Alas, my childhood OM-1 has seen better days.

When I hit 19 or so, I got a part-time job at the now-defunct Jessops camera store and enjoyed my first income stream and a staff discount, both of which accelerated my photography interest quite a bit. My next camera was the EOS 3000N, a more modern film SLR. After that I went digital with the EOS 300D, then an original 5D, then a 7D, and after a brief stint with a 60D I’ve landed on the wonderful EOS 6D.

Now, I’m not actually going to review the camera per se — there are lots of photography review websites far more qualified to do that than me. However, I do absolutely adore this thing — its image quality is superb and it’s beautifully light and easy to carry around. The incredible silent drive mode combined with the amazingly tiny 40mm f/2.8 STM pancake lens allows me to wander around the city and take photos without a bunch of people staring at me — I feel discreet with this setup, which is something I haven’t felt in years.

“I’m not fat, I’m just full-frame!”

However, this camera is the first SLR I’ve purchased that has a built-in shelf life on some of its features, which makes me slightly uncomfortable.

The March of Technology

I have a sort of love-hate relationship with technology, specifically computing technology. I’m a programmer, so obviously computing technology and I are intertwined, but I’m a firm believer that if the user is conscious that their machine has computers in it, the designers failed unless the product is specifically advertised as THIS IS A COMPUTER.

Let me give you an example. I’ve been fortunate enough to own several different cars so far in my life, and by far my favourite is the Mazda RX-8. The RX-8 is designed to be a mechanically brilliant car, and excels in this area — you can hurl it around the track like a madman and it’ll just lap it up and egg you on. When you’re driving, it’s all mechanical — a stubby gearstick sticks up directly from the gearbox, the steering has a very direct connection to the road, and on the dashboard you have a thing that tells you how fast the engine is going, another thing that tells you how fast the car is going, and not much else.

The RX-8’s dashboard.

Underneath all of this is an incredible amount of computing power trying to make sure I don’t slam face-first into the nearest wall — computers making sure the engine is mixing fuel correctly, computers making sure the car goes in the direction I tell it to, and so on. Back in the cockpit, though, all you’re doing is giggling wildly as this glorious heap of metal seems to defy the laws of physics and propels you around whatever twisty piece of Tarmac you happen to be on, with absolutely no indication that all this computing is going on behind the scenes unless something goes wrong.

And that’s the way it should be.

Isn’t this supposed to be about a camera?

So, how is this relevant to the EOS 6D? Well, to me, cameras are like cars (and washing machines, toasters, fridges, etc) — they’re appliances designed to do a certain job. Your relationship with them is physical, and all of the control is done through the manipulation of buttons, switches and levers, not touch-screens or keyboards and mice. Sure, they’re computers on the inside, but if I’m conscious of that then something has gone wrong somewhere.

The 6D’s chassis (photo credit: Canon).

This is a physical device which I use to make photographs. Sure, it has a computer inside it, but that’s a detail you don’t care about — the specifications talk about the image quality and how well the camera stands up to rain, not gigahertz and RAM. In ten years, I should still be able to use it to take pictures and as long as I can get them out of the camera and into my computer, I’m set. The computing technology in this thing has one job — to help the user create the best photos they can.

It makes me slightly uncomfortable to switch on my camera to see this:

I’m not actually 100% sure why this feature irks me so much. The WiFi feature of the camera is incredibly useful — I can connect it to my computer or phone and shoot and preview shots wirelessly. However, seeing Facebook and Twitter icons on this thing crosses the line from an appliance to a computer. Before, my camera longevity concerns were all physical — how long will the shutter last? What happens if I get it wet? What if I drop it?

Now, I get to think about stuff other than my camera when thinking about my camera. Twitter are notorious for being strict with their API — what if it changes or goes away? What if Facebook goes the way of every other social network before it? That’s fine on a “big” computer — I’m already conditioned to updating and changing the software on it — but my camera? It’s a metal box with buttons and switches on it, and I shouldn’t have to deal with “computer crap” like this.

Conclusion

Well, the EOS 6D is a great camera and you should all go buy one. However, seeing features like Facebook and Twitter integration in it makes me worry about a future filled with appliances whose feature sets have shelf lives. It’s not just this camera, though — the whole industry is going this way, even cars. The Tesla Model S has Google Maps built right into the dashboard, for example.

My Olympus OM-1 is still exactly as functional as it was when it came out forty years ago. Will I be able to say that about my 6D forty years from now? How about if I bought a Model S? It seems that as technology advances, previously immutable appliances like cameras and cars are getting caught in the net of rapidly obsoleting technology.

Then again, maybe I’m just getting old. My first camera didn’t even require a battery, so my idea of what a camera should be is obviously biased towards a mechanical interface — and memories of mechanical cameras are going the way of the floppy disk.

It’s Alive, but Still Very Stupid

Well over a year ago, I blogged about starting a project in which I’d replace a radio-controlled car’s guts with an Arduino and have it navigate to a given GPS location.

Well, that project is finally underway.

Hardware

It very quickly became apparent that an Arduino alone wouldn’t cut it for the kind of computational work I wanted to do, mainly because of the tiny amount of RAM it has. I ended up pairing a Raspberry Pi with an Arduino Uno. The Arduino’s job is to interface with the various sensors on the car and pass their readings back to the Pi, which has far more resources for doing computation.

Note: This chapter is a fairly quick overview of how the car is put together. You can find a shopping list with exact components at the end of this post.

Arduino

The Arduino has a prototype board attached to it, which on the underside has two three-pin connectors for connecting the car’s servos (one for speed, one for steering). The car’s speed controller is connected to the battery and provides the Arduino with power.

The top of the board (which is very messy — I intend to build a much neater one) hosts an accelerometer as well as a few cables for powering the Raspberry Pi, powering the ultrasonic sensors and reading data from the ultrasonic sensors.

Black: Raspberry Pi power, Yellow and white: Ultrasonic sensor power, White four-lane: Ultrasonic sensor data, Raised red board: Accelerometer.

There are four ultrasonic sensors mounted on the car’s body — three at the front and one at the rear. All the cabling for these sensors ends up at a pair of connectors on the inside of the roof, which allows the body to be easily separated from the chassis when needed.

Leaving the body clear so you could see the electronics seemed like a good idea at the time, but instead it all looks messy and is really hard to photograph. Lesson learned!

The Arduino and prototype board are mounted inside an Arduino case that’s attached to the car’s chassis with zip ties. The case has a lid, but I’ve left it out of the photos to illustrate what goes there.

The vertical posts support the body, which rests on the clips. They can be raised to give more room.

Raspberry Pi

The Raspberry Pi is mated with a UI module from BitWizard, which hosts a 2x16 character LCD display, six buttons and a few breakout connectors for various serial busses.

Raspberry Pi with attached UI module. The garbled character to the right of the up arrow should be a down arrow, but there seems to be a bug in my custom character code!

The Raspberry Pi connects to the Arduino twice — once for power from the Arduino and once via USB to communicate with it. When it’s all assembled, it gets rather messy!

Thankfully, with the body on, it’s a lot cleaner. The final part is to find a housing for the Raspberry Pi and a place to mount it on the car itself.

Here’s a diagram of how everything fits together. Clear as mud!

Software

Arduino

The Arduino is running a very simple loop that polls the attached sensors and writes their values out to the serial port. ACCEL: lines are accelerometer readings in G, and DISTANCE: lines are ultrasonic sensor readings in cm.

ACCEL: 0.06,0.05,0.89
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.05,0.90
ACCEL: 0.06,0.05,0.90
ACCEL: 0.06,0.05,0.88
DISTANCE: 89,111,32,15
ACCEL: 0.07,0.05,0.89
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.04,0.90
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.06,0.90
DISTANCE: 89,111,32,15

Sample Arduino output.
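
As a rough sketch of how the Pi side might consume these lines (this isn’t necessarily how the real Sensor classes do it; the parser and its names below are mine), parsing boils down to splitting on the prefix and the commas:

using System;
using System.Globalization;

// Illustrative sketch only, not the project's real parsing code.
// "ACCEL: 0.07,0.05,0.90" yields kind "ACCEL" and [0.07, 0.05, 0.90];
// "DISTANCE: 89,111,32,15" yields kind "DISTANCE" and [89, 111, 32, 15].
public static class SensorLineParser
{
    public static bool TryParse(string line, out string kind, out double[] values)
    {
        kind = null;
        values = null;

        int colon = line.IndexOf(':');
        if (colon < 0)
            return false;

        kind = line.Substring(0, colon).Trim();
        string[] parts = line.Substring(colon + 1).Split(',');

        values = new double[parts.Length];
        for (int i = 0; i < parts.Length; i++)
        {
            if (!double.TryParse(parts[i].Trim(), NumberStyles.Float,
                                 CultureInfo.InvariantCulture, out values[i]))
                return false;
        }

        return true;
    }
}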

In addition, the Arduino listens for input on the serial port for setting speed and steering values for the servos. This is not unlike the protocol used in my Arduino LED project, with two header bytes, a byte for the steering angle (0 – 180), a byte for the throttle angle (0 – 180) and a checksum byte.
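
Building such a packet on the Pi side might look like the sketch below. The post doesn’t spell out the header byte values or the checksum scheme, so the 0xBA/0xBE headers and the simple additive checksum here are assumptions, not the real protocol.

using System;

// Builds the five-byte command described above:
// [header, header, steering 0-180, throttle 0-180, checksum].
// Header values and checksum scheme are assumed, not taken from the real project.
public static class ServoCommand
{
    const byte Header1 = 0xBA; // assumption
    const byte Header2 = 0xBE; // assumption

    public static byte[] Build(byte steeringAngle, byte throttleAngle)
    {
        if (steeringAngle > 180 || throttleAngle > 180)
            throw new ArgumentException("Angles must be between 0 and 180.");

        // Assumed checksum: low byte of the sum of the two payload bytes.
        byte checksum = (byte)((steeringAngle + throttleAngle) & 0xFF);
        return new byte[] { Header1, Header2, steeringAngle, throttleAngle, checksum };
    }
}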

Main Software Stack – Raspberry Pi

Everything so far just enables the main software stack to observe and interact with the car’s hardware.

The main software stack is written in C# against the Mono Framework. I chose this setup because it’s pretty much the only nice object-oriented language with a fully featured runtime available on multiple platforms (of course, there’s also Python and Java, but I prefer C# over those two). This setup allows me to write and debug the code on Mac OS X, then copy it over to the Raspberry Pi running Debian Linux for real-life use.

At the moment, the software stack is at the point where it’s a fully functional object model wrapping all of the implementation details of the car:

  • The Sensor class tree provides objects representing the various sensors on the car, providing events for when their readouts change.

  • The Servo class provides getters and setters for adjusting the servos on the car.

  • The SerialCarHardwareInterface class implements ICarHardwareInterface, which defines various methods for getting at the sensors and servos on the car. This is split out into an interface for when I need to implement a mock car for testing AI routines without risking damage to my car or other property (it goes quite fast!) — see the sketch after this list.

  • The CarHTTPServer class provides a simple REST API over HTTP to allow other computers on the network to observe the car’s sensors. This is great for writing tools to visualise the car’s status graphically.

RCSensorVisualizer showing the car’s accelerometer (top) and distance (bottom) sensor readouts graphically.

  • The CarEventLoop class runs a dual loop for running AI code. The first loop is a low-latency loop that monitors the car’s sensors and can have interrupt handlers attached to it — simple classes that decide whether execution should be halted, for example if the car turns upside-down. The second loop runs on a different thread and is where the main AI processes will take place. This dual setup allows the car to detect that it’s upside-down and halt operation even if an AI process is taking a long time.

  • The I2CUIDevice class provides an interface to the screen mounted to the Raspberry Pi, allowing text to be written to the screen and firing events when buttons are pushed.

  • The MenuController class and friends provide logic for presenting a menu system on the display, allowing menu item selection and navigation, as well as “screens” for presenting information or prompting the user to confirm a chosen action.
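
To give a feel for the hardware-interface split mentioned above, here’s a rough sketch. The type names (ICarHardwareInterface, Sensor, Servo) come from the list, but the member shapes are guesses on my part; the real definitions live in the GitHub repository linked at the end of this post.

using System;
using System.Collections.Generic;

// Sketch only: the type names match the post, the members are guesses.
public abstract class Sensor
{
    // Fired whenever the sensor's readout changes.
    public event EventHandler ReadoutChanged;

    protected void OnReadoutChanged()
    {
        var handler = ReadoutChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class Servo
{
    // Servo angle, 0-180, as sent over the serial protocol.
    public byte Angle { get; set; }
}

public interface ICarHardwareInterface
{
    IEnumerable<Sensor> Sensors { get; }
    Servo SteeringServo { get; }
    Servo ThrottleServo { get; }
}

// A mock implementation lets AI routines run on a desktop machine without
// risking the (quite fast) real car.
public class MockCarHardwareInterface : ICarHardwareInterface
{
    public IEnumerable<Sensor> Sensors { get { return new Sensor[0]; } }
    public Servo SteeringServo { get; private set; }
    public Servo ThrottleServo { get; private set; }

    public MockCarHardwareInterface()
    {
        SteeringServo = new Servo();
        ThrottleServo = new Servo();
    }
}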

Bringing It All Together

Below is a video showing the whole lot working together. I scroll through the menus on the Raspberry Pi and observe sensor readouts as well as adjusting the steering and throttle.

The project is now ready for its next steps, which will be writing AI code to have the car navigate its surroundings. It doesn’t have GPS at the moment so it’ll be limited to “drive forwards and don’t crash into stuff” for now, but it’s a start!
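
For illustration, a first “don’t crash into stuff” routine might look something like the sketch below, written against the simplified Servo type from the sketch above. The thresholds, neutral throttle angle and sensor layout are assumptions of mine, not the project’s actual AI code.

// Hypothetical obstacle-avoidance step, not the project's real AI.
// Distances are in cm from the three front ultrasonic sensors; servo angles
// follow the 0-180 convention above, with 90 assumed to be neutral.
public class DontCrashAI
{
    const int StopDistanceCm = 40;                  // assumed safety margin
    const byte SteerLeft = 45, Straight = 90, SteerRight = 135;
    const byte Stop = 90, GentleForward = 110;      // assumed throttle angles

    public void Step(int frontLeftCm, int frontCentreCm, int frontRightCm,
                     Servo steering, Servo throttle)
    {
        bool blockedAhead = frontCentreCm < StopDistanceCm;

        if (blockedAhead && frontLeftCm < StopDistanceCm && frontRightCm < StopDistanceCm)
        {
            throttle.Angle = Stop;                  // boxed in on all sides: stop
            return;
        }

        throttle.Angle = GentleForward;
        if (blockedAhead)
            steering.Angle = frontLeftCm > frontRightCm ? SteerLeft : SteerRight;
        else
            steering.Angle = Straight;
    }
}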

You can find the code for this project over on my GitHub. It includes the Arduino sketch, the C# software stack for the Raspberry Pi, and an Objective-C Mac application for observing the sensors.

Shopping List

The project at the moment uses the following hardware: