January 16th, 2017

Excuse Me Sir, But Can I Rattle Your MacBooks?

Back in 2001 I had a G4 Cube that I loved dearly, and a then state-of-the-art iPod that plugged into one of its two FireWire ports. Unfortunately, that Cube loved to fry its FireWire ports — several trips to the repair centre meant walking miles to my friend’s house so I could rip my CDs to his second-generation iMac and then onto my iPod.

Since then, I’ve had great luck with Apple products. Apart from a PowerMac G5 that couldn’t survive having Coke poured into it and the odd iPhone that didn’t like being smashed into the ground, I’ve had 15 years of mostly trouble-free experience with Apple hardware.

Unfortunately, this has come to an end with the 2016 MacBook Pro. Now, I’m not normally one to complain about stuff on my blog, but I feel the journey I’m still undergoing with this machine is kind of fascinating — and an interesting insight into what happens when good customer service and poor products clash. Also, this is by far the worst experience I’ve had with Apple hardware in my life.

MacBook Pro #1 & #2: A Normal DoA Experience

In December, my wife borrowed my MacBook Pro for something and called to me: “Did it always make this noise?”, demonstrating a metallic, springy-sounding noise when she placed it onto a table. We shall call this metallic, springy-sounding noise Rattle A, which will be important later.

No, it did not.

A call to Apple later and a new MacBook Pro (MBP #2) is being assembled and shipped to me. Great! Unfortunately, since I ordered a machine with a custom spec, it’s coming all the way from China. At this point, it’s no big deal — the occasional DoA product is part of life.

MBP #1’s rattle.

A couple of weeks later, the new machine arrives at my door. I unbox it and give it a little side-to-side shake. Immediately out of the box, it makes a plasticky clonking sound that you can feel through your hands. We shall call this plasticky clonking sound Rattle B.

After some bitching on Twitter, another call to Apple and about 45 minutes on hold gets me put through to some senior department. Very sorry for my bad luck, a second replacement (MBP #3) is being assembled and shipped to me, again from China. The agent agreed that it’d be silly to transfer my data to MBP #2 when MBP #3 is on its way, so a return for MBP #2 is arranged. The next day, it leaves my house.

MBP #2’s rattle.

MacBook Pro #3: Excuse Me Sir, But Can I Rattle Your MacBooks?

This is where it starts to get a bit abnormal.

MBP #3 turns up, and immediately out of the box it exhibits Rattle B. I call Apple again, and eventually get to a nice lady in after-sales who’s very sympathetic to my bad luck, and is adamant that they’ll keep sending me MacBook Pros until I get one that doesn’t rattle.

However, I’ve been doing some of my own research and I’m starting to think that Rattle B is a systemic problem. I explain my (entirely anecdotal) thinking and we come up with a plan: I’ll go to the Apple Store and see if any machines on display there exhibit the same problem. If not, I’m just having terrible luck, right?

So, at opening time on Saturday morning I walk into the Apple Store and try to explain to the employees there that:

  1. I want to shake their MacBook Pros.
  2. I’m not crazy.

After surprisingly little convincing, they let me go ahead. Of the eight MacBook Pros I tried, two exhibited Rattle B.

A rattling MacBook Pro at the Apple Store.

I return home resigned to having a MacBook Pro with Rattle B. Annoying, but I don’t tend to shake my MacBook Pro much, so it’s not a huge issue to live with. I take the machine out of the box, unwrap the plastic and set it down on the table.


Praying that I’m hallucinating, I pick it up and set it down again.


MBP #3 exhibits both Rattle A and Rattle B. Superb. Time for a Twitter rant.

MBP #3’s rattle.

MacBook Pro #4: Maybe I Am Crazy!

At 10am this morning, the phone rings with the promised callback from the lady I spoke to on Friday.

After explaining my results at the Apple Store and the fact that MBP #3 is the worst one so far, we come up with another plan. Here we see what happens when your customer service greatly outclasses the quality of your product:

MBP #4 is being assembled and shipped, again from China. However, this time it’s being shipped to the Apple Store, where I can inspect it and hand it straight off for repair if it continues to show these problems.

I’d like to repeat that last part, for emphasis: An agreed plan with customer service is for the product to be shipped to a store with the expectation that it’ll immediately go in for repair.

What Next?

If this were almost any other company (or if I were new to Apple), I’d have given up at MBP #2. However, Apple have 15 years of good experience in the bank, as well as very good customer service trying their hardest to make this current issue right.

However, all that goodwill is gone: MBP #4 will be their last chance. The Apple Store is a 1hr 30min round trip from my home, something I’ll probably have to do twice — once to find out MBP #4 rattles too, and again to collect it after it’s been repaired.

Here’s a timeline, for brevity:

2016-12-17 First call to Apple about MBP #1.
2017-01-02 MBP #2 arrives.
2017-01-02 Call to Apple about MBP #2.
2017-01-04 MBP #2 is collected for return to Apple, MBP #3 is ordered.
2017-01-09 MBP #3 leaves China.
2017-01-13 MBP #3 arrives.
2017-01-14 "Excuse me, but can I rattle your MacBooks?" at the Apple Store.
2017-01-16 Call Apple, MBP #4 is ordered for delivery to the Apple Store.

Some reaction I’ve received on Twitter is questioning why I care so much about a rattle. This machine cost 32,595 SEK (~$3,650 USD, ~£2,990 GBP, ~€3,400 EUR), and for that ludicrous amount of money, I expect a computer with all of its components attached together properly. I don’t think that’s unfair, and so far Apple customer support agrees with me.

The interesting question comes if MBP #4 still rattles. While I’m fortunate that this machine isn’t (yet) my primary computer, I have a business to run and unfortunately I’m a Mac and iOS developer, which basically requires that I own a Mac. I really want this MacBook Pro to replace my iMac so I can have a more portable work machine, but if Apple can’t sell me a computer I’m happy with — what then?

In the words of the greats: I’m not angry, I’m just disappointed. Maybe I should develop for Windows Phone instead.

October 28th, 2016

Launching Cascable 2.0

Cascable is the app I’ve been working on since early 2013 — firstly as a side project, then as a full-time endeavour starting mid-2015. You can read more about this journey in my Secret Diary of a Side Project series of posts, the first one of which can be found here.

“It won’t be as stressful as the 1.0 release”, I lied to myself as much as to my wife when she asked me how I was feeling about launching Cascable 2.0 the next day. Over the past couple of weeks I’d woken up several times during the night grinding my teeth, badly enough to chip one of them.

The truth is, the 2.0 launch ended up being much more stressful than 1.0, although I genuinely didn’t see it coming. Cascable 1.0 was the product of a side project — it shipped a few months after I quit Spotify, and a lot of that post-Spotify time was spent working on ancillary details like the website, marketing, documentation, and so on.

Getting to 2.0

Version 2.0 shipped on August 11th, 2016 and was the result of nine solid months of work, starting in October 2015 with this tweet:

Nine months is a very long time to be working on a single update, and it can be really damaging to your self-esteem, particularly when working alone. Roughly 300 tickets were solved between starting 2.0 and shipping it. That’s 300 issues. 300 things wrong with my code. 300 times I or someone else had opened up JIRA and created a ticket describing something missing or broken in my code.

Of course, this is part and parcel of being a developer. However, you typically have other developers working alongside you to share the burden and a reasonable release cadence that (hopefully) provides real-world evidence that your work is good enough for production.

In the weeks before the launch, I didn’t feel stressed at all — we’d had a very long TestFlight period with over 100 testers across all the different camera brands Cascable now supports, and all of the major issues were ironed out. I’d enforced a feature freeze at the beginning of June, and a ship-to-App Store date of July 29th. That’s two months in feature freeze and two weeks between uploading to the App Store and releasing — plenty of time to iron out any issues before shipping, and plenty of time to iron out any App Store problems before releasing.

Plus, this time I had help in the form of Tim, who’d been diligently working away at the website for weeks. It was finished by the time I hit code freeze, and better than ever: much more content and some lovely extras like a nicely made video.

Everything should be wonderful, right? Lots of time to iron out bugs, help with shipping and over 100 people using the app for a few months should make this launch something to be excited about.

However, those nine months of JIRA tickets had taken their toll. My self-confidence was incredibly low, and I was scared to death that we’d launch and some stupid mistake I’d made would cause the app to crash for everyone, ruining the app’s (and my) credibility. Cascable would be a laughing stock, and I’d have to go find a real job again.

On top of this, with 2.0 Cascable would be transitioning from paid-up-front to free with In-App Purchases to unlock the good stuff. It’s a move we needed to make — a $25 up-front payment is an impossible sell on mobile — but a huge, well-known risk of doing so (it was the first thing every developer friend of mine mentioned when I told them of this plan) is receiving a massive amount of support email from free users and unfair one-star reviews.

“You realise that you’ll immediately get people downloading it without looking then leaving you one-star reviews because it isn’t Instagram right?”, said one.

As the Cascable launch approached, my belief in my own abilities was at an all-time low, and I was expecting to be buried in an avalanche of one-star reviews and email.

Launch Day

Launch day came, and the app was sitting in iTunes Connect, waiting for me to click the “Release” button. An attempt at having it happen automatically was stymied by a problem with iTunes Connect that resulted in hours on the phone with support, which ended up making the problem worse. In the end, I had to yank the previous version from sale a few days before 2.0’s launch. D’oh!

This is not the history of a smooth release process!

I clicked the “Release” button, and braced myself for a horrible week.

But, the avalanche never came. Instead, we got great coverage, a big pile of downloads and some really positive reviews.

Looking back, I consider it a very successful launch. Neither my wildest dreams nor my deepest fears came true — the switch to freemium didn’t make me an overnight millionaire, but we didn’t get buried by one-star reviews and support email either.

It’s amazing what shipping code can do to your self-esteem. After a couple of quick point-releases to fix some crashes that did crop up — all of them reasonably rare, thankfully — Cascable’s crash-free sessions metric is in the very high-90% range (on the day of writing, it’s at 98.5%). Of course that can be improved, but between the subjective reviews and this objective data, I’ve completely regained my confidence that I’m able to write and ship a decent product. Hooray!

It’s worth noting again what an incredible difference it makes to have someone helping out on stuff that isn’t code. I don’t think Tim would be upset with me if I said that he’s by no means a professional website builder, nor is he a professional video editor. Yet, thanks to him, I had a burden lifted from my shoulders and Cascable’s launch had that extra layer of quality I’ve never been able to achieve on my own.

So, with all of that self-congratulation out of the way, let’s look at some cold, hard data!

How did the launch actually go?

The established launch pattern for iOS apps is to have a huge launch spike that tails off fairly sharply. This “long tail” is a tough thing to endure, and can be fatal.

Our spike followed normal trends. Here’s our downloads over the first few days of 2.0:

Downloads during the launch.

However, if we compare that to the number of purchases over the same period, a couple of things stick out:

Purchases during the launch.

First, the spike for purchases was a couple of days after the spike for downloads. Second, the purchases graph doesn’t lose quite as much momentum as the downloads graph, which (along with our retention data) shows that a decent proportion of that download spike was from drive-by users — people who had seen the app as part of the initial media push, tried it once, and never used it again.

Was switching to Freemium the right thing to do?

I believe that Cascable is a pro-level tool and should command a pro-level price — particularly for a niche app in the physical photography sector. Yes, $25 is a huge barrier to entry on mobile, and our 1.x sales show that. However, the problem we need to solve is showing users that the app is worth the price it commands.

At the most basic level, yes, it was the right thing to do. Cascable is earning more money than it was when it was paid-up-front. However, there’s a lot more to it than that!

For several months, my plan was to have the app work with basic features for free, and implement a single In-App Purchase for $25 to unlock the whole app. However, after some discussion, we ended up shipping four separate In-App Purchases, as follows:

Product Cost Description
Cascable Pro: Photo Management $10 Support for RAW images, bulk copying, filtering and searching, image editing.
Cascable Pro: Remote Control $10 Powerful camera remote control and shot automation tools.
Cascable Pro: Night Mode $10 A dark theme for the app.
Cascable Pro: Full Bundle $25 All of the above.

The biggest drawback of this is development complexity. Different parts of the app need different feature checks, and we need to communicate to the user what they need to purchase to get which feature in a non-confusing way. Indeed, the latter point worried me right up until launch: we decided a support article with a big-ass table was necessary to explain it all.

In practice, though, I think the user experience isn’t too bad. We’ve only had one support ticket from someone who’d accidentally bought the wrong thing so far, which makes us hopeful it isn’t too confusing for our users.

The upside to all this added complexity is that we get to reduce sticker-shock (“$25?! Screw that!”) and up-sell to the user. We’re trying to avoid the aggressive sales pitch if at all possible, and don’t start asking for money until the user wants to do something that isn’t free.

Here’s a typical flow. Feel free to download Cascable and follow along!

Here’s a typical screenshot of Cascable running as a free user. Notice there’s absolutely no indication they haven’t paid for the app.

Here, the user has encountered a feature that requires them to part with some money. At this point, we don’t pop up a store or otherwise interrupt their flow:

In some places, particularly in lists, we place a “Pro” button in place of the switch or button that would invoke a particular feature:

If they tap on a “Pro” button or a “More Information…” button, they’ll get the In-App Purchase store showing the cheapest available purchase that’ll unlock the feature they’re trying to work with, along with a little video previewing everything that purchase will unlock. The video is shipped as part of the app bundle, so there’s no waiting for it to download.
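That “cheapest available purchase that’ll unlock the feature” logic can be sketched out quite simply. This is an illustrative sketch only (the product IDs, prices, and feature names here are made up, not Cascable’s actual identifiers or code):

```python
# Hypothetical mapping from products to the features they unlock.
PRODUCTS = {
    "pro.photo_management": {"price": 10, "features": {"raw", "bulk_copy", "editing"}},
    "pro.remote_control":   {"price": 10, "features": {"remote", "automation"}},
    "pro.night_mode":       {"price": 10, "features": {"dark_theme"}},
    "pro.full_bundle":      {"price": 25, "features": {"raw", "bulk_copy", "editing",
                                                       "remote", "automation",
                                                       "dark_theme"}},
}

def cheapest_unlock(feature, owned=frozenset()):
    """Return the cheapest product that unlocks `feature`,
    or None if the user already owns an unlock for it."""
    if any(feature in PRODUCTS[p]["features"] for p in owned):
        return None  # already unlocked
    candidates = [(v["price"], pid) for pid, v in PRODUCTS.items()
                  if feature in v["features"]]
    return min(candidates)[1] if candidates else None

# A free user hitting RAW support is shown the $10 Photo Management
# unlock first, not the $25 bundle.
print(cheapest_unlock("raw"))  # -> pro.photo_management
```

The upsell step in the purchase dialog then takes care of nudging the user from the $10 unlock toward the $25 bundle.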

If the user attempts to purchase the presented In-App Purchase, they’ll be presented with this dialog:

This is where we get a chance to upsell the user to the more expensive (but better value for money) purchase. If the user taps “View Pro Bundle”, the purchase will be cancelled and they’ll be shown the video and description of the bundle. Otherwise, the purchase of the requested item will continue.

Finally, once the user has purchased the unlock for a feature, the original message is replaced with controls for the feature itself.

As you can see, even though payment and billing logic is provided by the App Store infrastructure, there’s still a ton of work to do if you want to provide a somewhat rich In-App Purchase experience for your users. Which you do want to do — that little “Give me money!” button is difficult for users to tap!

A little extra touch we added to give some extra gratification to our paid users is a friendly, heart-adorned version of Colin (our unofficial name for the anthropomorphised camera mascot used throughout the app):

This version of Colin is slightly more whimsical than the tone of the rest of the app, but I really love this version of him, and he’s reserved just for our paid users.

So, does our store work?

The following data is taken from a five week period during that long tail after the big spike.

Over the five-week period this data is from, our average conversion ratio from viewing the store to making a purchase was 21%. This compares to a conversion ratio of 4% from all users of the app to making a purchase.

I’m pretty happy with 21% — less so with the 4%. What this data shows us is that we need to get people more interested in the expanded feature set — enough to go into the store to take a more detailed look.

Overall, our paid:free ratio is about 20%, which I don’t feel is too bad.
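As a back-of-envelope check, those two conversion ratios together imply how many users ever open the store at all (assuming nearly every purchaser views the store first):

```python
# If essentially every purchase goes through the store, then:
#   overall conversion = store-view rate * store-to-purchase conversion
store_to_purchase = 0.21    # conversion from viewing the store to purchasing
overall_to_purchase = 0.04  # conversion from all users to purchasing

store_view_share = overall_to_purchase / store_to_purchase
print(f"{store_view_share:.0%}")  # -> 19%
```

In other words, only about one in five users ever sees the store, which matches the conclusion that the job is getting more people interested enough to take a detailed look.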

Does our upsell work?

This graph shows the entry point to the In-App Purchase store within Cascable: that is, the product users first see when the store is shown to them. Once they’re in the store, users can swipe left and right to browse all the available options, but the data for that isn’t graphed here. As you can see, the entry point is reasonably evenly spread between the three individual $10 unlocks, with the $25 bundle coming in last. This is because the only way to see the bundle first is to navigate to the “Purchases” item in Settings and tap the button next to the bundle. The rest are encountered when using the app normally.

In-App Store entry point by product over five weeks during our long tail.

This next graph shows the products purchased over the same period. As you can see, the Full Bundle significantly outperforms the other products, despite the fact that it’s more expensive and isn’t the product the user is shown first in most circumstances.

In-App Store purchases by product over five weeks during our long tail.

I think it’s a reasonable conclusion that the upsell is having a positive effect on sales. However, we don’t have enough data to say whether or not this is definitely the best approach. For that, we’d need to compare our upsell to the following scenarios:

1) What if we still had four separate In-App Purchases at the same prices, but without the upsell from the $10 ones?

2) What if there was only one $25 In-App Purchase as originally planned?

However, my feeling is that we’ve hit a nice middle ground. With no upsell, I’m reasonably confident that we’d sell fewer $25 bundles, and with no $10 options I think the sticker-shock factor would be too high.

What Next?

Cascable 2.0 shipped in August, followed by an immediate feature update alongside the iOS 10 launch in September. In its current state, I consider the “2.x” app reasonably feature-complete — engineering-wise, my tasks are to keep up to date with new cameras from our supported manufacturers, keep on top of customer requests, and regroup for Cascable 3.0.

The aim is to make Cascable AB a sustainable business. While it’s not quite there yet, we’re certainly on the right track and the income graph is creeping up towards the expenditure graph.

As tempting as it is to dive into Cascable 3.0 right now, I’ve been looking at nothing but that app for a year now, and I’m risking burnout. Instead, over the next few months we’re taking a radical departure from my own historic approach (SOLVE PROBLEMS BY PROGRAMMING!! codes harder) and will be putting effort into marketing the iOS app we have.

For me, it’s time to take a step back, hand Cascable’s reins over to Tim for a while, and focus on the long-term future of the company in the form of other engineering projects. This way, I can come back to Cascable 3.0 fresh and excited about the new features.

With that in mind, the next couple of months will be focused on making the company sustainable in the long term in ways that aren’t adding new features to the existing app — it’s feature-complete enough that individual features won’t make that critical difference.

First Approach: Get more people to use Cascable

First, we’re experimenting with various advertising streams to get users into the app and using it. So far, we’re only in the first phase of this and are trying out Facebook, Instagram, Twitter, Google AdWords and App Store Search ads. It’s too early to draw any conclusions from this, but it seems that App Store Search ads are significantly outperforming the rest.

Additionally, we’re reaching out to photography websites, magazines, camera manufacturers, and so on to try and get coverage. It’s difficult for a tiny, unknown company like ours to cut through the noise, but we’re starting to get noticed.

Second Approach: Get more people to convert to paid users

We recently shipped an update to Cascable that adds an “Announcements Channel”. This allows us to publish content online for presentation to users inside the app. We’re trying to make this visible to the user without being annoying — no push notifications, no noises, no alerts. Hopefully the little unread indicator won’t be too abrasive to our users.

Our intent is to publish high-quality content roughly once per week at most, mainly in the form of previewing and linking to articles on our website about how to get the most out of Cascable’s features — for example, a detailed article on using Cascable’s automation tools to make time-lapse videos, long exposures of the night sky, and so on.

The channel allows us to present different content depending on what purchases the user has made, so for paid users we can say “Here’s how to make this awesome stuff with what you already have!”, while for free users we can frame it more towards “Look at the cool stuff you could do if you had this!”.
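That per-purchase content selection could be as simple as this sketch (the structure and copy here are hypothetical, not our actual implementation):

```python
# Illustrative sketch: choosing which variant of an announcement to show,
# based on the user's purchases. All names and copy are made up.

def announcement_variant(announcement, owned_products):
    """Show the 'you already have this' copy to users who own the
    relevant unlock, and the pitch to everyone else."""
    if announcement["required_product"] in owned_products:
        return announcement["paid_copy"]
    return announcement["free_copy"]

timelapse_post = {
    "required_product": "pro.remote_control",
    "paid_copy": "Here's how to make time-lapses with the tools you already have!",
    "free_copy": "Look at the time-lapses you could make with Remote Control!",
}

print(announcement_variant(timelapse_post, {"pro.remote_control"}))
```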

The intention is to increase conversion from free users while at the same time increasing the happiness of our paid users by helping them get the most of what they have. This will be a tricky line to walk well, though.

Third Approach: Don’t put all our eggs in the iOS basket

Relying on one platform for income gives me the heebie-jeebies, particularly when that platform is one as difficult to reliably make money on as iOS.

In a previous Secret Diary of a Side Project post, I discussed how I’ve been taking the extra effort to make sure our core camera connection stack is architected in a manner that keeps it cleanly separated from the Cascable app and fully functional on macOS as well as iOS.

With Tim working on the first two approaches, I’ve started working on branching out to macOS. Thanks to a fully functional core library, I’ve been able to cash in on this past work and start incredibly quickly — I built a functional (and reasonably polished) prototype of a Mac app in less than two weeks, and we’re aiming to ship it by early December.


As much as being an overnight success is the dream, it doesn’t tend to happen like that in the real world. After a couple of years of hard work, it looks like a sustainable business is starting to get within reach — Cascable’s progress looks remarkably similar to that of my (mostly) successful foray into indie development all the way back in 2005. In fact, Cascable is doing better than my old company was after the same time period, but back then I lived in my parents’ house basically for free — Cascable has a much higher bar to reach in order to be considered “successful”!

As always, feel free to get in touch with me on Twitter.

April 12th, 2016

Secret Diary of a Side Project: No Longer Alone

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

It’s been nearly ten months since my last Secret Diary post, and since then I’ve been doing nothing but keeping my head down and plodding along:

  • First, I shipped a couple of bugfix updates.
  • In August 2015, I released a feature update that added some powerful new stuff.
  • In September 2015, I released a feature update that added support for some new platform goodies — watchOS 2 and iOS 9 split screen.

Other than a couple of minor bugfix updates, there’s been nothing new released since then. So, what’s going on?


It was clear that on its current course, Cascable wasn’t going to be sustainable — a fact everyone (including myself) could see coming a mile away. A niche product with limited hardware support and a $25 up-front cost isn’t going to fly in today’s mobile world.

That said, the people who do buy Cascable seem to love it. I’ve had some great reviews and many lovely emails from happy users.

So, what to do? Obviously, moving to a free up-front business model and adding support for more cameras is what we do with the app (and is what I’ve been working on since December), but what about the company?

After a week or two of self-reflection and chatting with those close to me, it came down to the choice of spending my remaining budget in one of two ways:

  1. Carry on by my lonesome for three years.

  2. Hire someone for one year.

This was an interesting choice. Having the freedom to not have to care about income for three years (until mid-2019!) is an opportunity I don’t think I’ll have access to again in my lifetime. However, it severely limits the pace at which I can move and the things I can achieve with Cascable, particularly when taking into account my skill set. In the end, the choice was easy.

Employee #1

As of last week, Cascable has employees! Tim is Cascable’s Head of Stuff That Isn’t Programming, and is responsible for doing all the things I’m either bad at or don’t have time for — all the things that are actually super important for a successful business (marketing, product direction, pricing, etc etc).

Now, the thing with employees is that you no longer have the freedom to fuck around. They’re people who depend on you to have your shit together enough to run payroll and otherwise deal with the stuff that puts food on their table. In keeping with that theme, this will be the very last Secret Diary post I write: thinking about Cascable as a “side project” is completely inappropriate now that other people are involved.

Thankfully, having Tim on board means that the weight is lifted from my own shoulders slightly, so I should be able to allow myself the time to write blog posts more often. Hooray!

January 3rd, 2016

Garmin VIRB XE Review Updated

Back in August 2015, I reviewed a new action camera on the market: the Garmin VIRB XE. I really liked it, and sold my GoPro cameras in favour of it. Since then, several software updates have come along, changing the experience quite a lot — particularly if you use the data recording and display features.

As such, I’ve updated my review to reflect what the camera is like in early 2016. Spoiler: It’s better!


You can find my full and updated review here.

December 6th, 2015

Sprucing Up Indoor Training with Simulated Power Data

The clocks have gone back and the nights are closing in. Here in Sweden, it’s already dark by 3:30pm!

The dark, more than the cold, severely dampens my enthusiasm for cycling in the evenings after work — the lovely path along the edge of the lake becomes a harrowing edge over a black nothingness.

So, it’s time to bring the evening rides indoors. I’m not a fan of regular exercise bikes – you have to spend silly money to get a decent one, and then you get some weird geometry. I already have a great bike that’s been perfectly set up over a period of time to provide the correct geometry for my body. Why can’t I use that?

Thankfully, there are stationary trainers that let you do just that. I have a Kurt Kinetic Rock and Roll Smart stationary trainer — it has a built-in Bluetooth power meter so I can manage workouts on my phone, and is built to allow side-to-side motion of the bike. Not only does this simulate real riding better, it allows the lateral forces I put through the bike to be absorbed by the spring in the trainer and not my rear wheel’s axle and rear triangle, putting to rest fears of stressing parts of the bike that don’t normally take those sort of forces.

Anyway! I’m all set up — this is gonna be just like riding outside!


Well, that’s boring. Why don’t I record a video of my ride to play back while I’m training? And if I’m doing that… it’d be great if I can overlay some data so I can match my pacing to the ride on the video. I use a Garmin VIRB XE camera, the software for which can import the data from my Garmin GPS to overlay my heart rate, speed, pedalling cadence and more over the video. This sounds perfect!

Unfortunately, this is where we hit a snag. The trainer I have has a “fluid” resistance unit, which ramps up resistance with speed — when I pedal fast in a high gear it’s difficult, and when I pedal slowly it’s easy. This sounds sensible enough until you realise that the hardest parts of my ride are up steep hills on off-road trails — I’m putting a ton of power down, but I’m travelling really quite slowly. This means that overlaying speed data onto my video is useless since the trainer is basically simulating a perfectly level road. What I need to overlay on my video is a readout of the actual power I’m putting out at any given moment.

I’m doing 5km/h here, but outputting nearly 300W. 5km/h on my trainer gives an almost negligible power output.

After a weekend of mucking around with several horrible looking programs, I finally managed to get a simulated-but-accurate-enough power figure into Garmin’s software, allowing me to overlay power output onto my video:

Now when riding indoors I can put my iPad and iPhone on a music stand (make sure you get a sturdy one!) and reproduce my outdoor ride by matching my live power output on the trainer to the one displayed in the video.

I love this method of training. It gives me something to look at while riding, and because it’s realtime from my ride, I get great pacing — it’s on local trails I know and ride frequently, and when I need rest stops, I’m already stopping to rest on the video.

Producing Simulated Power Data

So, how do we get that live power overlay?

The easiest option would be to buy an actual power meter for my bike. Most of them are designed for road bikes, and all of them are expensive — you’re looking at upwards of $1,000, which is a bit spendy for a project like this.

So, with that out, we need to simulate our power data. I use the popular site Strava to track my rides, and they provide a pretty decent-looking “simulated” power graph for each ride:

Annoyingly, though, there’s absolutely no way to get this data out of Strava in any meaningful way, so that’s out. Garmin’s similar service, Garmin Connect, doesn’t produce this data at all, so that’s out too.

Looks like we’re going to have to do this manually!


  • A video recording of a bike ride.
  • Some recorded telemetry data from that same ride, such as from a GPS unit.
  • GoldenCheetah, an open-source data management application.
  • Fitness Converter, a free application by yours truly for converting fitness files between formats.
  • Garmin VIRB Edit, a free video editor that can overlay data onto your video.


First, we’re going to load our recorded telemetry data (heart rate, speed, pedalling cadence, etc) from the GPS into GoldenCheetah, a piece of software for working with this sort of thing. Once imported, clicking the “Ride” tab should show graphs of your data:

Note: On the first launch, GoldenCheetah will ask you to set up a profile. You need to enter an accurate weight for you and your bike to get accurate power data.

Next, choose “Estimate Power Values…” from the Edit menu. Once you complete the process, you’ll see more graphs added to your data, including a “Power” graph. If you have other data to compare to, such as Strava’s Simulated Power graph, you can compare them, and if GoldenCheetah’s data is significantly wrong you can choose “Adjust Power Values…” from the Edit menu to move it all up or down.
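GoldenCheetah’s estimate is physics-based: given your total weight, your speed, and the gradient, power is roughly the sum of rolling resistance, climbing, and aerodynamic drag. Here’s a minimal sketch of that idea in Python (the crr and cda coefficients are assumed typical values for illustration, not GoldenCheetah’s actual ones):

```python
import math

def estimated_power(speed_ms, gradient, total_mass_kg,
                    crr=0.005, cda=0.4, air_density=1.225):
    """Rough cycling power estimate in watts.

    crr (rolling resistance) and cda (aero drag area, m^2) are assumed
    typical values; real estimators fit these per rider and surface."""
    g = 9.81
    slope_angle = math.atan(gradient)  # gradient as rise/run, e.g. 0.15
    rolling = crr * total_mass_kg * g * math.cos(slope_angle) * speed_ms
    climbing = total_mass_kg * g * math.sin(slope_angle) * speed_ms
    drag = 0.5 * air_density * cda * speed_ms ** 3
    return rolling + climbing + drag

# 5 km/h up a steep 15% off-road climb, 90 kg of rider plus bike:
# a crawl speed-wise, but a serious power output.
# estimated_power(5 / 3.6, 0.15, 90)
```

This is exactly the mismatch the fluid trainer gets wrong: at 5 km/h almost all of the power goes into climbing, which a speed-based resistance unit can’t see.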

Finally, choose “Export…” from the Activity menu to export the file as a TCX file.

Unfortunately, we’re not quite there — Garmin’s software can’t import TCX files, so we need to convert our new file to the FIT format. The best pre-existing solution I could find for this was really quite terrible, so I ended up writing my own (as you do): Fitness Converter.
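As an aside, TCX is plain XML, so it’s easy to inspect the exported data yourself before converting it. A quick sketch that pulls the per-trackpoint power values out of a TCX file (the namespace URLs are from Garmin’s TCX and ActivityExtension schemas; power sits in a per-trackpoint extension element):

```python
import xml.etree.ElementTree as ET

NS = {
    "tcx": "http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2",
    "ext": "http://www.garmin.com/xmlschemas/ActivityExtension/v2",
}

def power_samples(tcx_path):
    """Yield (ISO timestamp, watts) for each trackpoint carrying power."""
    root = ET.parse(tcx_path).getroot()
    for tp in root.iter("{%s}Trackpoint" % NS["tcx"]):
        time = tp.findtext("tcx:Time", namespaces=NS)
        watts = tp.findtext("tcx:Extensions/ext:TPX/ext:Watts", namespaces=NS)
        if time and watts:
            yield time, int(watts)
```

Handy for sanity-checking that the estimated power actually made it into the exported file.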

Once the data is in the FIT format, we can import it into VIRB Edit. Since the VIRB XE camera has GPS in it, it has the accuracy to automatically sync the data from my GPS unit (now with added power data!) perfectly. If you’re not in this position, you can manually sync your data file to the video.

…aaand, we’re done. You can now add your graphs and overlays as you wish using VIRB Edit. Since speed is completely irrelevant in this instance, I leave all that out and just have a single giant power bar, which is far easier to read mid-workout than a constantly changing number.

Happy training!

Next training spend: a bigger screen!

August 16th, 2015

Garmin VIRB XE for Automotive and Track Days: A First Impressions Review

Update January 2016: I’ve updated this review to reflect the camera and its software after a few months and a few software updates. Happily, it’s pretty much all positive. Parts of the review that are now incorrect are still here but are struck through so you can see what’s changed.

Note: For the first part of this review, I’m going to ramble on a bit about my history with this sort of thing and why I’m so hopeful that the VIRB XE isn’t crappy for use on track days. If you don’t care, you can scroll down a bit to get to the real review.

We were totally ahead of the times, man!

I’ve always loved cars and driving. As soon as I had a car more interesting than my Mum’s 1.2L Vauxhall Corsa (SXi!) I started going on track days. As my skills and enjoyment grew I wanted to record videos of my driving to show my friends and catalogue my improvement over time, so I started to record my track driving.

But! Without data, track driving videos are boring. Check out this recent one of mine — even if you’re a car nut, I bet you won’t make it through more than a lap or two before getting bored.

Back in 2007 I was bored of my dataless videos, and as part of my final year at university, I wrote a prototype Mac application to add graphical overlays to my track day videos. It was just a prototype, but it worked great and I was really proud of what I’d made — enough that it still gets a space in my abbreviated life history.

However, while the software was ready, the hardware for gathering the data just wasn’t there. iPhones and iPads were just beginning to arrive, and the other smartphone platforms at the time weren’t quite suitable. In particular, the Windows Mobile devices used at the time didn’t have accurate enough clocks to reliably time the data, warranting a whole section in my dissertation discussing interpolating timestamps.

In 2007, no camera came close to the tiny action cameras of today (particularly in the consumer space) so I ended up using an HDV camcorder strapped into the car.

For recording data from the car I used a reasonably high-end (in the consumer space) OBD to Serial dongle that was advertised as being “high speed”. It read data from the CAN bus of my car at roughly 5Hz, which meant if you wanted to record multiple properties at once, you rapidly lost nuance in your data.

Since there was nothing like the iPad back then, I ended up using a tablet PC designed for outdoor use - it had a digital pen for input, and a special display that was readable outdoors and terrible everywhere else. This thing ran full-blown Windows XP and cost a fortune.

I had well over £3,000/$4,500 worth of big, heavy equipment. Here’s an example of what all that would get you when combined with my prototype software:


Perfectly acceptable (despite the hilariously slow data acquisition rate), but I ended up abandoning the project. Strapping all that stuff into your car was just not fun, and the marshals at most track days I went to weren’t desperately happy with the thought of that amount of stuff flying around the car if I crashed. Compare the photos above with my equipment list below and you’ll see just how far we’ve come!

VIRB XE: The Review

This review focuses on the experience the VIRB XE gives when using it to create driving videos, typically on a track day or on a road trip. As well as the camera itself, I’ll be using it with the following equipment:

  • An OBDLink LX — a Bluetooth OBD dongle for interfacing with the car.
  • A Raceseng Tug View — a tow hook with an integrated GoPro mount.
  • An Audio-Technica ATR3350 microphone and Zoom H1 audio recorder.

The camera is attached to the front of my car (along with a lot of bugs!) using the Tug View.

A Note On Audio

Garmin claims their microphone “…records clean and clear audio that cameras in cases just can’t pick up”, which is an implied bash at GoPro, I suppose. While that may be true, the interesting noises from a car come from under the bonnet or out the back, neither of which are interesting places for a camera. Therefore, this review won’t deal with sound quality.

That said, my video explaining how to get good sound quality from your car on a track day does use the VIRB XE for the clips at the end, so if you’re an expert on what wind noise should sound like, go nuts!


A Note On Video Quality

I’m not going to directly compare video quality to other cameras either — I don’t have the skill set to do a good job of it. The video quality seems great, though, and the camera does an admirable job in difficult autoexposure situations, like driving through a shady forest on a sunny day.

Pre… Impressions…?

Garmin, I’m going to level with you: paper launches suck. This camera was announced in April and I was super excited about it, thrusting cash at my computer screen with the enthusiasm of a kid in a candy store. And then you said “summer”, and my enthusiasm waned. I went to a track day in August (firmly in “summer”) and the camera still wasn’t available. “Garmin suck!” I found myself saying to my friend, grumpy that I was still waiting for the camera.

That’s a pretty negative feeling to come back from.

First Impressions

This review is going to compare to the GoPro a lot. They’re the de-facto standard in this space, and I’ve been using them for years. They have a huge amount of momentum, but I’ve actually been falling out of love with them for a little while. They’ve always been a bit fiddly, but silly design decisions like that stupid port cover and a flimsy USB connector that’s soldered (poorly, in one of mine) to the mainboard make it feel fragile, which is exactly the opposite of what you want in an outdoor action camera.

Within seconds of pulling the VIRB XE out of its box, you realise it’s different. After a couple of minutes, you get the feeling that it’s been designed with care for its intended environment — dropping off my bike into a muddy puddle.

The whole thing is really well put together. A few particular details stand out for me:

The easy-to-push buttons and the big, chunky “record” switch are great to use with gloves on.

The screen is lovely and clear compared to that of the GoPro.

A little tray holds inserts that absorb moisture to prevent the camera from fogging. The inserts are reusable and four are included in the box (one of which I promptly lost because they’re small and I’m stupid).

All electronic interfacing is done using this external set of pins. No female ports means no ports have load-bearing flimsy soldering, no holes for water to get in, and no stupid port cover.

Sensibly, they’ve accepted that GoPro currently rule the roost in the market and the camera is directly compatible with the GoPro ecosystem of mounts.

However! It’s not all perfect.

A very minor niggle is that the “Menu” button on mine feels a bit weird. You feel it click when you push it, but nothing happens. You need to push a tiny bit harder to get the button to register.

A much less minor niggle is the cable connecting mechanism. The cable snaps on using a very rugged connector (which is great), but when I pick the camera up it disconnects as if I’d unplugged it. I can reproduce this every single time with my camera and cable, which is quite worrying. Randomly disconnecting is a great way to corrupt the filesystem. Sure, I can work around that by taking the SD card out and using a card reader, but what happens if my dog bumps my desk during a firmware update?

Hopefully, this is just a niggle with my particular camera. I’ll contact Garmin about it and update this review with their reply.

Update January 2016: The weird menu button isn’t unique to my camera. There are theories on the Garmin forums that it’s actually a half-press button like the shutter button on a camera, and there’s nothing yet assigned to a half press. Garmin’s response was that the camera was acting as normal. I haven’t actually used the cable again since this review, and I haven’t pursued it further.

Recording a Car Video

During setup, the camera created a WiFi network and paired with my iPhone perfectly, and the camera allows you to customise its SSID and password on-screen.

Next, I connected it to my OBDLink LX. It took a few clicks of the “Scan” option in the VIRB’s Bluetooth settings before it saw my OBD dongle, but once it found it the two paired instantly. While the camera was adamant it was connected to my car, the VIRB App on my iPhone reported “No connected sensors”. Thankfully the camera was right, and the data from my car was recorded perfectly. Hopefully the glitch in the app will be fixed.

I attached the camera to the front mount on my car, started my audio recorder then used the VIRB app to start the camera from my iPhone. After a little beep of the horn (for syncing my separate audio recording with the video), I set off for a 25-minute drive around a local lake.

Once home, I was able to connect to the camera using my phone and stop recording. Everything appeared to have worked just fine.

Editing a Car Video

This is where I’m ready to be let down. I wrote the app I wanted (well, a prototype of it) eight years ago, and nothing has come close since. Like the bride who’s been planning her wedding since she was a small girl, reality can never quite match up to expectation. Nobody will write the app I want.

Data and Gauges

Expectations lowered, I fire up VIRB Edit for the first time and import the recording straight from the camera.

Holy crap. With zero effort I have a full set of data and a map synced to my drive. This is wonderful!

The quality of the data recorded by the VIRB seems great — the OBD data came out perfectly despite there being a couple of metres and an engine between the camera and the Bluetooth OBD adapter, and the application managed to handle the device losing a GPS fix for a few seconds with grace, resulting in a slightly funny-looking map (bottom left of the map in the screenshot above — the road isn’t that square) but no other problems.

However, the data is a bit too perfect, and the app seems too trusting of it. In particular, G-forces. With the camera directly bolted to my car’s chassis, the camera’s internal accelerometer seems to pick up every tiny little vibration, which VIRB Edit displays without filtering as this example from a perfectly smooth road shows:

Update January 2016: I’m happy to report that this problem has been completely fixed with firmware 3.70, released in early December 2015. I was concerned that the vibrations from being directly bolted to my car with a metal mount would be too much to overcome, but with the firmware update the G-force data from the VIRB is lovely and smooth, and picks up gentle curves and speed changes just fine. You can see a before (left) and after (right) comparison below:


It’d be nice if there was an option to have the application perform a low-pass filter on the data. This would reduce the responsiveness of the data slightly, but my 1,200kg car isn’t changing direction fast enough in any axis to make that a huge problem.

VIRB Edit comes with a number of templates which work great, and a lot of individual gauges that you can customise the colours of to create your own layouts and styles.

If that’s not enough, you can create your own gauges and edit them, which is a superb feature to have for power users — I plan to make gauges in VIRB Edit to match the ones in my car, and I bet others will do the same.

Video Editing

VIRB Edit is a basic, newbie-friendly video editing application, and the features it does have work well, although I did notice a little audio hiccup during playback when two sequential clips (the camera splits recordings into fifteen-minute chunks) are placed together.

There are a number of features I need to produce my track day videos that VIRB Edit doesn’t have:

  • The ability to import a separate audio track (from my audio recorder) and precisely sync it (and keep it synced) with the audio track of the video.
  • The ability to rotate the video slightly when I mount the camera slightly off-level.

Now, I’m not saying Garmin should implement all these features — that’d be silly given the number of video editors already out there at any price range you can mention. Normally, I’d just import my video into my editor of choice and edit to my heart’s content. However, the addition of data overlays makes that problematic — if I add my data overlays in VIRB Edit then export for further editing, a number of problems occur:

  • An extra layer of encoding has happened, reducing the quality of the video.
  • The gauges are baked into the video, meaning any rotations, colour corrections, etc will be applied to them as well.

I could go the other way — import the raw video into my editor of choice, apply corrections, merge in the better audio, etc, but you still end up with an extra encoding step that reduces quality.

Solving this is actually relatively easy, and my prototype application from years ago had this built-in: several video formats and containers support videos with alpha channels. What I’d love to do is add my data overlays in VIRB Edit then export a lossless video containing only the overlays on a transparent canvas. This way, I could import the original video and the overlays into my editor of choice and keep them in separate tracks, allowing me to apply rotations and colour corrections to the video to my heart’s content. Bonus points for being able to export each overlay separately, allowing the sweet animations seen in Garmin’s own VIRB XE promotional video!

Update January 2016: I’m not sure if I was just being dumb when I wrote this review, but I’ve recently found an option in VIRB Edit’s preferences: Export transparent PNG sequence for overlays. This does exactly what it says on the tin, and after exporting a video it’ll separately export a sequence of transparent PNGs containing only the overlays. Apple’s Motion editing software picked this sequence up directly with no further action needed on my part. The only minor downside to this is that you’ll have one PNG sequence containing every single overlay, which is less useful if you want to animate them independently. This can be worked around, though, by exporting multiple times with one overlay at a time. The minor downside to that approach, though, is that there’s no option to export only the overlay PNG sequence, so you have to re-export the video itself as well. This can become a lengthy process!

Hail To The Power User

One thing I’d like to call out about this product that won’t be talked about in most reviews is Garmin’s attitude towards advanced/power users. Many companies lock away the inner workings of their products in what often turns out to be a futile effort as users tend to reverse-engineer the fun stuff anyway. GoPro’s WiFi protocol has been mostly reverse-engineered, for instance, and there’s a wide range of GoPro “hacks” (which turn out to mostly be undocumented config files) to enable features like long exposures.

Garmin, on the other hand, publishes documentation for controlling their VIRB cameras on their own VIRB Developer site, and VIRB Edit has an “Advanced Editing” button on its already pretty advanced gauge editor which opens up a JSON file in your favourite text editor alongside a PDF documenting the file format.

For most users, this means nothing. However, I love this attitude — I can customise my gauges to my heart’s content and write little apps to control my camera if I want, all using tools provided to me by Garmin.
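As a tiny example of what that enables: the camera is driven over WiFi by POSTing JSON commands at it. Here’s a sketch of building such a request — note that the /virb endpoint path and command names like "startRecording" are quoted from my memory of Garmin’s VIRB Network Services documentation, so treat them as assumptions and check the Developer site:

```python
import json

def virb_request(command, camera_ip="192.168.0.1", **params):
    """Build the URL and JSON body for a VIRB WiFi command.

    The /virb endpoint and command names ("startRecording", "status",
    etc.) are recalled from Garmin's VIRB Network Services docs;
    verify against the VIRB Developer site before relying on them."""
    url = "http://%s/virb" % camera_ip
    body = json.dumps(dict(command=command, **params))
    return url, body

# POST `body` to `url` with Content-Type: application/json using any
# HTTP client; the camera answers with a JSON status object.
```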


Short Version

I’ve already - and I’m not joking - sold all of my GoPro cameras.

Long Version

I bought this camera within its first week of availability in Sweden, and unfortunately these days that means software niggles are to be expected. However, I’ve owned a number of Garmin devices (and still do) and they’ve a long history of continuing to improve their products over time. My four year old GPS unit still gets regular software updates, for instance. I have a very positive opinion of Garmin as a company — they make solid products and solid software, so I’m hopeful they’ll resolve the bugs I found.

Update January 2016: I’m happy that my faith in Garmin seems to have been well placed - the more problematic software issues have been fixed by updates.

I am rather concerned about the flaky connection between the camera and its USB cable, though. This is certainly a hardware issue — I’ll contact Garmin and see what they say.

Overall, though, I love this camera and have already sold all my GoPros. The combination of its superb build quality and extra data acquisition features is killer for me, and is a joy to have after years of lacklustre GoPro updates.

The Camera: Good

  • It feels like it’s built like a tank — I love the record switch in particular.
  • Lots of thought in the design — the moisture tray and port design stand out.
  • Lovely screen compared to the GoPro.
  • Paired with my OBD dongle and phone effortlessly.
  • Directly compatible with the GoPro ecosystem of mounts.

The Camera: Bad

  • PAPER LAUNCH DAMNIT! Don’t show me a product I want then wait four months to start selling it!
  • Cable doesn’t fit snugly and disconnects when I move the camera. Hopefully this is a one-off thing.
  • One of the buttons feels weird. Again, hopefully a one-off niggle. May actually be as-designed. Garmin considers it ‘normal’.
  • Proprietary cable isn’t super great when you need an emergency charge in a world of micro USB. I see why they did it and, like Apple’s Lightning, the pros outweigh the cons most of the time.
  • Only one sticker in the box. I’m prepared to go full fanboy with this thing, and I only have one sticker?!

VIRB Edit: Good

  • Great Mac citizen — you’ve no idea how many companies ship crappy “cross-platform” desktop software.
  • Gauges functionality covers all my uses, from great looking templates through to complete and total customisability.

VIRB Edit: Bad (as of August 2015)

  • Accelerometer data needs a low-pass filter — it’s unusably noisy when the camera is bolted to my car’s chassis. Fixed with firmware 3.70.
  • Audio glitch when transitioning between clips that’ve been cut up by the camera.

Missing Features

  • Ability to export a translucent video containing only the gauges so I can edit the source video in my preferred editor and keep the data overlays clean. Feature exists, but is slightly hidden. My fault!

June 21st, 2015

Secret Diary of a Side Project: In Reality, I've Only Just Started

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

On March 27th 2013, I started an Xcode project called EOSTalk to start playing around with communicating with my new camera (a Canon EOS 6D) over its WiFi connection.

Over two years and 670 commits later, on June 5th 2015 (exactly a month late), I uploaded Cascable 1.0 to the App Store. Ten agonising days later, it went “In Review”, and seventeen hours after that, “Pending Developer Release”.

Late in the evening the next day, my wife, our dog, a few Twitter friends (thanks to Periscope) and I sat together by my desk and clicked the Release This Version button.


I absolutely meant to blog more in the three months since my last Secret Diary post, and I’m sorry if you’ve been looking forward to those posts. An interesting thing happened — I thought I’d have way more time for stuff like blogging after leaving my job and doing this full-time, but I’ve ended up with way less. A strict deadline and a long issues list in JIRA made this a full-time 9am-6pm job. So much for slacking off and playing videogames!

Fortunately, though, I still have a few things I want to write about and now I can slow down a bit, I should start writing here on a more frequent basis again.


Some stats for Cascable 1.0 for the curious:

  • Objective-C implementation: 124 files, 23,000 lines of code
  • C/Objective-C headers: 133 files, 2,400 lines of declaration
  • Swift: none
  • Commits: 670

Now, lines of code is a pretty terrible metric for comparing projects, but here’s the stats for the Mac version of Music Rescue, the last app of my own creation that brought in the Benjamins:

  • Objective-C implementation: 154 files, 24,000 lines of code
  • C/Objective-C headers: 169 files, 4,100 lines of declaration
  • Swift: this was 2008 — I barely had Objective-C 2.0, let alone Swift!

As you can see, the projects are actually of a similar size. It’s a completely meaningless comparison, but it’s interesting to me nonetheless. Back in 2008 I considered Music Rescue a pretty massive project, something I don’t think about Cascable. I guess my experience with the Spotify codebase put things in perspective.

You can check Cascable out here. You should totally buy a copy!


At NSConference 7 I gave a short talk which was basically Secret Diary: On Stage, in which I discussed working on this project.


In that talk, I spoke about a bottle of whiskey I have on my desk. It’s a bottle of Johnnie Walker Blue Label, and at £175 it’s by far the most expensive bottle of whiskey I’ve bought. When I bought it, I vowed it’d only be opened when a real human being that wasn’t my friend (sorry Tim!) exchanged money for my app.

Releasing an app is reward in itself, but there’s nothing tangible about it. Having that physical milestone there to urge me on really was helpful when I was on hour four of debugging a really dumb crash, for instance.

This weekend, that bottle was opened. It tasted like glory.

May 1st, 2015

Build-Time CFBundleVersion Values in WatchKit Apps

When building a WatchKit app, you’ll likely encounter this error at some point:

error: The value of CFBundleVersion in your WatchKit app’s Info.plist (1) does not match the value in your companion app’s Info.plist (2). These values are required to match.

Easy, right? We just make sure the values match. But… what if we’re using dynamically generated bundle version numbers derived from, say, the number of commits in your git repository? Well, we just go to the WatchKit app’s target in Xcode, click the “Build Phases” tab and… oh. There isn’t one.

So, if we’re required to have our WatchKit app mirror the CFBundleVersion of our source app and we’re generating that CFBundleVersion at build time, what do we do? First, we wonder why this mirroring isn’t automatic. Second, we try to modify the WatchKit app’s Info.plist file from another target before realising that it screws with its code signature. Third, we come up with this horrible workaround!

The Horrible Workaround

The workaround is to generate a header containing definitions for your version numbers, then use Info.plist preprocessing to get them into your WatchKit app’s Info.plist file.

This little tutorial assumes you already have an Xcode project with a set up and working WatchKit app.

Step 1

Make a new build target, selecting the “Aggregate” target type under “Other”.

Step 2

In that new target, create a shell script phase to generate a header file in a sensible place that contains C-style #define statements to define the version(s) as you see fit.

My example here generates two version numbers (a build number based on the number of commits in your git repo, and a “verbose” version that gives a longer description) then places the header into the build directory.

# Derive version numbers from git, then write them out as C-style
# defines for Info.plist preprocessing to pick up.
GIT_RELEASE_VERSION=$(git describe --tags --always --dirty --long)
COMMITS=$(git rev-list HEAD | wc -l | tr -d '[:space:]')

mkdir -p "$BUILT_PRODUCTS_DIR/include"

# Use > for the first line so the file is rewritten, not appended to,
# on every build.
echo "#define CBL_BUNDLE_VERSION ${COMMITS}" > "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"
echo "#define CBL_VERBOSE_VERSION ${GIT_RELEASE_VERSION}" >> "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"

echo "Written to $BUILT_PRODUCTS_DIR/include/CBLVersions.h"

The file output by this script looks like this:

#define CBL_BUNDLE_VERSION 670
#define CBL_VERBOSE_VERSION a6f5bd0-dirty

Step 3

Make your other targets depend on your new aggregate target by adding it to the “Target Dependencies” item in the target’s “Build Phases” tab. You can add it to all the targets that you’ll use the version numbers in, but you’ll certainly need to add it to your WatchKit Extension target.

Step 4

Xcode tries to be smart and will build your target’s dependencies in parallel by default. However, this means your WatchKit app will be built at the same time as the header is being generated by the aggregate target, which will often result in build failures due to the header not being available in time.

To fix this, edit your target’s scheme and uncheck the “Parallelize Build” box in the “Build” section. This will force Xcode to wait until the header file has been generated before moving on.

Step 5

Edit the build settings in your targets as follows:

  • Preprocess Info.plist File should be set to Yes.
  • Info.plist Other Preprocessor Flags should be set to -traditional.
  • Info.plist Preprocessor Prefix File should be set to wherever your generated header file has been placed. In my case, it’s ${CONFIGURATION_BUILD_DIR}/include/CBLVersions.h.

Step 6

Finally, change the values in your Info.plist files to match the keys in your generated header file. In my case, I set CFBundleVersion (also known as Bundle Version or Build depending on where you’re looking in Xcode) to CBL_BUNDLE_VERSION.
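For example, the relevant entry in the WatchKit app’s Info.plist source ends up looking like this; with preprocessing enabled, the CBL_BUNDLE_VERSION token is swapped for the value from the generated header at build time:

```xml
<key>CFBundleVersion</key>
<string>CBL_BUNDLE_VERSION</string>
```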

Step 7

Go to the Apple Bug Reporter and ask (nicely) that they give us build phases back for WatchKit apps. You can dupe mine (Radar #20782873) if you like.

Step 8



This is horrible. We need to disable parallel builds and generate intermediate headers and all sorts of nastiness. Hopefully we’ll get build phases back for WatchKit apps soon!

You can download a project that implements this tutorial here.

March 24th, 2015

NSConference 7

“I checked the version of your presentation with the video in it, and it works fine. Shall we just use that one, then?”

Panic set in, again. Scotty was already onstage and in the process of introducing me, so I had to think fast. I’d been accepted to give a “blitz talk” — that is, a short, 10-minute long presentation — at NSConference this year, and I’d put a little video clip that at best could be described as “stupid” into my slides. I thought it was funny, but I was so worried that it’d be met with a stony silence by the hundreds of attendees that I’d also provided a copy without the video.

At least it’ll be an interesting story to tell, I thought to myself, and confirmed that I’d use the version with the video before stepping out into the blinding lights of the stage.

Here we go!

NSConference has always been about community. I’ve been fortunate enough to attend a number of them over the years, following it around the UK from Hatfield to Reading to Leicester. I’ve met a number of friends there, and it’s always inspiring. The mix of sessions normally has a fairly even distribution of technical and social topics, and this year was no exception — some fantastic speakers gave some wonderfully inspiring talks that really touched close to home, and others gave some fascinating technical talks on the old and the new.

Rather than list them now, I’m going to do a followup post when the NSConference videos are released that’ll link to my favourite talks and discuss why I found them so great.

However, the talks are only half of it. I’m pretty shy around new people, and my typical conference strategy is to sit with people I already know during the day, then hide in a corner or my hotel room during the evenings. This time, however, I was determined to at least try to make friends, and with little effort I found myself speaking to so many new people I can barely remember them all. Everyone was so friendly and so supportive, and I had a huge number of really interesting conversations with people from all over the world.

A joke is a great way to break the ice, someone once said. I start with “The lunches aren’t so light if you go back for thirds, are they?”1, referencing the fact we were given a light lunch that day in preparation for the banquet later. Sensible chuckle from the audience. Alright, maybe my video won’t flop after all!

“Hello everyone,” I continued, “My name is Daniel and for the past four years I’ve been working as a Mac and iOS developer at Spotify. And four days ago — last Thursday — I left to become an Indie developer. Today, I’m—”

I was interrupted by a huge round of applause that went on long enough to mask my stunned silence. This is what NSConference is about: hundreds of friends and strangers coming together to support one another in whatever we’re doing. One of the larger challenges in what I’m doing is the solitude — I left a job where I’m interacting with a lot of people every day to one where I sit alone in a corner of my house. As I stand on the stage, the applause lifts me up and drives home that while I may physically be on my own, I have a huge community of peers that are right behind me and are willing me to succeed.

As the applause dies down, I do a “Thank you, goodnight!” joke to move around the stage and regain my composure. Thirty seconds later, we arrive at my stupid video.

My thumb hovers over the button to advance the slide and start the video. If I double-click it, it’ll skip the video! A moment’s hesitation…


That two second video clip got what I think was one of the biggest laughs of the conference, and I was so relieved I even started laughing at it myself.

Right! Time to get my shit together — I’m supposed to be sharing information!

At the end of the conference, heartfelt things were said onstage as the sun set on the final NSConference — there wasn’t a dry eye in the house. During this, staff handed a glass of whiskey to every single person in the audience. At the very end, Scotty held a toast, then left the stage as we clinked glasses.

The last NSConference came to a close with the sound of hundreds of people clinking glasses in toast to seven years of incredible experiences. The sound resonated around the hall for a number of minutes before eventually subsiding, and is something I’ll never forget.

As a tribute to the conference and the work the organisers put in, the community is banding together to raise money for Scotty’s favourite cause, Water.org, which has the goal of providing clean water to everyone who needs it. You can donate at the NSConference 7 fundraiser page.


  1. It should be noted that my talk wasn’t really scripted so I’m recounting what I said from memory. When the video is released it’ll likely prove that I’m misremembering my exact wording. The gist will be the same, though. 

March 10th, 2015

Secret Diary of a Side Project: The Refactor From Hell

Why I need a designer: Exhibit A.


This innocuous little button cost me a week. Let that sink in. A week.

It’s a simple enough premise — when the user is presented with a dialog like this, you should give them a way out. Presenting a button-less dialog is all kinds of scary — what if the camera crashes and doesn’t give the expected response, or any response at all? Sure, I can guard against that, but still.

So, it’s settled! I’ll implement a Cancel button so the user can back out of pairing with their camera. What a completely logical and easy thing to do.


Here’s the problem I faced:

Typically, when you connect to a camera you send it a message to initialise a session, then wait for a success response. This normally takes a small number of milliseconds, but when the camera is in pairing mode it won’t respond at all until the user has gone through a few steps on the camera’s screen.

All we need to do is sever the connection to the camera while we’re waiting, right? Easy enough. However, the architecture of my application has it working with the camera in a synchronous manner, writing a message then blocking until a response is received. All this is happening on a background thread so it doesn’t interfere with the UI, and since the camera has a strict request-response pattern, it works well enough. However, in this case, I can’t sever the connection on the camera’s thread because it’s completely blocked waiting for a response. If I try to do this from a separate thread, I end up with all sorts of nasty state — dangling sockets and leaked objects.
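To illustrate, the old synchronous pattern boiled down to something like this (a sketch with made-up method names, not my actual code):

```objc
// Old approach (sketch): write a message, then block this background
// thread until the camera's response arrives. Interrupting this wait
// cleanly from another thread is the hard part.
-(NSData *)sendMessageAndAwaitResponse:(NSData *)message {
    [self writeToCamera:message];       // Send the request.
    return [self blockingReadResponse]; // Blocks until the camera replies.
}
```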

The solution to this sounds simple — instead of doing blocking reads, I should schedule my sockets in a runloop and use event-based processing to react when responses are received. That way, nothing will ever be blocked and I can sever the connection cleanly at any point without leaving dangling sockets around.
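In outline, that event-based approach looks something like this (a minimal sketch using NSStream; the class and method names are illustrative, not my actual implementation):

```objc
// Sketch: schedule the camera connection's input stream on a runloop and
// react to events as they arrive, instead of blocking on reads.
@interface CameraConnection : NSObject <NSStreamDelegate>
@property (nonatomic, strong) NSInputStream *inputStream;
@end

@implementation CameraConnection

-(void)open {
    self.inputStream.delegate = self;
    [self.inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                forMode:NSDefaultRunLoopMode];
    [self.inputStream open];
}

-(void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode == NSStreamEventHasBytesAvailable) {
        uint8_t buffer[1024];
        NSInteger bytesRead = [self.inputStream read:buffer maxLength:sizeof(buffer)];
        if (bytesRead > 0) {
            [self handleResponseData:[NSData dataWithBytes:buffer length:bytesRead]];
        }
    }
}

-(void)handleResponseData:(NSData *)data {
    // Parse the response and hand it to whoever sent the matching request.
}

// Since nothing is ever blocked, the connection can be torn down cleanly
// at any point, which is exactly what the Cancel button needs.
-(void)disconnect {
    [self.inputStream close];
    [self.inputStream removeFromRunLoop:[NSRunLoop currentRunLoop]
                                forMode:NSDefaultRunLoopMode];
    self.inputStream.delegate = nil;
}

@end
```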


Seven hours later I’m sitting at my desk with my head in my hands, wishing I’d never bothered. It’s 11pm, and later my wife tells me she’d approached me to come play video games but decided I looked so grumpy I’d be best left alone. I have no idea why it’s not working. I’m sending the exact same bytes as I was before, and getting the same responses. It actually works fine until traffic picks up — as soon as you start to send a lot of messages, random ones never get a response.

Well after midnight, I throw in the towel. I’d been working at this one “little” problem nonstop for eight hours, my code was a huge mess and I almost threw away the lot.

“I’m such an idiot,” I told my wife as I got into bed, “I even wrote about this on my blog, using the exact code I’m working on as an example”.

Yup, this is that old but reliable code I wrote about a couple of months ago. The class I said I’d love to refactor but shouldn’t because it worked fine.

One way of proving a hypothesis, I suppose.

As I was drifting off to sleep, I had an idea. I decided it could wait until the morning.

I slumped down into my chair the next morning and remembered my idea. Twenty minutes later, it was working like a charm1.


So, now it’s working and a darn sight better looking than my old code. However, the two years’ worth of confidence and proven reliability that I had with the old code has vanished — it seems to work, yes, but how can I be sure? Maybe there are bugs in there that haven’t shown themselves yet.

If You Don’t Have Experience, You Need Data

I’ve been writing unit tests here and there for parts of my app where it makes sense.

“Business logic” code for the app is simple enough to test — instantiate instances of the relevant classes and go to town:

CBLShutterSpeed *speed = [[CBLShutterSpeed alloc] initWithStopsFromASecond:0.0];
XCTAssert(speed.upperFractionalValue == 1, @"Failed!");
XCTAssert(speed.lowerFractionalValue == 1, @"Failed!");

CBLShutterSpeed *newSpeed = [speed shutterSpeedByAddingStops:-1];
XCTAssert(newSpeed.upperFractionalValue == 1, @"Failed!");
XCTAssert(newSpeed.lowerFractionalValue == 2, @"Failed!");

Parsing data given back to us by the camera into objects is a little bit more involved, but not much. To achieve this, I save the data packets to disk, embed them in the test bundle and load them at test time. Since we’re testing the parsing code and not that the camera gives back correct information, I think this is an acceptable approach.

-(void)test70DLiveViewAFRectParsing {
    NSData *rectData = [NSData dataWithContentsOfFile:[self pathForTestResource:@"70D-LiveViewAFRects-1.1.1.dat"]];
    XCTAssertNotNil(rectData, @"afRect data is nil - possible integrity problem with test bundle");

    NSArray *afAreas = [DKEOSCameraLiveViewAFArea liveViewAFAreasWithPayload:rectData];
    XCTAssertNotNil(afAreas, @"afRects parsing failed");

    XCTAssertEqual(afAreas.count, (NSUInteger)31, @"Should have 31 AF areas, got %@", @(afAreas.count));

    for (DKEOSCameraLiveViewAFArea *area in afAreas) {
        XCTAssertTrue(area.active, @"Area should be active");
        XCTAssertFalse(area.focused, @"Area should not be focused");
    }
}
Alright, so, how do we go about testing my newly refactored code? It poses a somewhat unique problem, in that my work with this camera is entirely based on clean-room reverse engineering — I don’t have access to any source code or documentation on how this thing is supposed to work. This means that I can’t compile the camera’s code for another platform (say, Mac OS) and host it locally. Additionally, the thing I’m testing isn’t “state” per se — I want to test that the transport itself is stable and reliable, that my messages get to the camera and its responses get back to me.

This leads to a single conclusion: To test my new code, I need to involve a physical, real-life camera.

Oh, boy.

Unit testing best practices dictate that:

  • State isn’t transferred between individual tests.
  • Tests can execute in any order.
  • Each test should only test one thing.

The tests I ended up writing fail all of these practices. Really, they should all be squished into one test, but a single test that’s 350 lines long is a bit ungainly. So, we abuse the fact that XCTest runs a class’s test methods in alphabetical order of their selector names: numbering them test_001, test_002 and so on forces them to execute in sequence.

First, we test that we can discover a camera on the network:

-(void)test_001_cameraDiscovery {
    XCTestExpectation *foundCamera = [self expectationWithDescription:@"found camera"];

    void (^observer)(NSArray *) = ^(NSArray *cameras) {
        XCTAssertTrue(cameras.count > 0);
        _camera = cameras.firstObject;
        [foundCamera fulfill];
    };

    [[DKEOSCameraDiscovery sharedInstance] addDevicesChangedObserver:observer];

    [self waitForExpectationsWithTimeout:30.0 handler:^(NSError *error) {
        [[DKEOSCameraDiscovery sharedInstance] removeDevicesChangedObserver:observer];
    }];
}
…then, we make sure we can connect to the found camera:

-(void)test_002_cameraConnect {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTestExpectation *connectedToCamera = [self expectationWithDescription:@"connected to camera"];

    [self.camera connectToDevice:^(NSError *error) {
        XCTAssertNil(error, @"Error when connecting to camera: %@", error);
        [connectedToCamera fulfill];
    } userInterventionCallback:^(BOOL shouldDisplayUserInterventionDialog, dispatch_block_t cancelConnectionBlock) {
        XCTAssertTrue(false, @"Can't test a camera in pairing mode");
    }];

    [self waitForExpectationsWithTimeout:30.0 handler:nil];
}

(I’m a particular fan of that XCTAssertTrue(false, … line in there.)

Next, because we’re talking to a real-life camera, we need to make sure its physical properties (i.e., ones we can’t change in software) are correct for testing:

-(void)test_003_cameraState {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTAssertTrue(self.camera.connected, @"Camera should be connected");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeAutoExposureMode] intValue], EOSAEModeManual,
                   @"Camera should be in manual mode for testing.");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeLensStatus] intValue], EOSLensStatusLensAvailable,
                   @"Camera should have an attached lens for testing");

    DKEOSFileStorage *storage = self.camera.storageDevices.firstObject;
    XCTAssertTrue(storage.capacity > 0, @"Camera should have an SD card inserted for testing.");
    XCTAssertTrue(storage.availableSpace > 100 * 1024 * 1024, @"Camera storage should have at least 100 MB available for testing.");
}

Once the camera is connected and verified to be in an agreeable state, we can start testing.

  • To test against the heavy-traffic dropouts that drove me to insanity that night, I run through every single valid value for all of the exposure settings (ISO, aperture, shutter speed) as fast as I possibly can.

  • To test that event processing works correctly, I stream images from the camera’s viewfinder.

  • To test filesystem access, I iterate through the camera’s filesystem.

  • To test commands, I take a photo.

  • To test that large transfers work, I download the photo the previous test took (about 25 MB on this particular camera).

  • And finally, I test that disconnecting from the camera works cleanly.

As you can see, this is a pretty comprehensive set of tests — each one is meticulous about ensuring the responses are correct, that the sizes of the data packets received match the sizes reported by the camera, etc — they’re essentially an automated smoke test.

The next challenge is to get these to run without human intervention. I can’t just leave the camera on all the time — if it doesn’t receive a network connection within a minute or two of powering on it’ll error out, and you need to restart the Wi-Fi stack to connect again — something not possible without human intervention. Perhaps a software-controlled power switch would allow the tests to power on and off the camera at will. However, that’s a challenge for another day.


So. In an earlier post I talked about being restrained when you think about refactoring code, and my ordeal here is exactly why. At the beginning it looked simple enough to do, but I ended up losing way too much time and way too much sleep over it, and when it finally appeared to work I had no data on whether it was any good or not. If I’d gone through all of that with no good reason it would’ve been a complete waste of time and energy.

But! Thanks to all this work, you can now cancel out of camera pairing from your iOS device! It’s a disproportionate amount of work for a single button, but that’s the way software development goes sometimes — no matter how obvious the next task might look, tomorrow’s just a mystery, and that’s okay. It’s what makes it fun!

Plus, I now have a decent set of smoke tests for communicating with a real-life camera, which is something I’ve been wanting for a long time — a nice little silver lining!


After implementing all this, I decided to have a look at how the camera’s official software approached this problem, UI-wise.

It looks like a floating panel, but it behaves like a modal dialog. There’s no way to cancel from the application at all and if you force quit it, the software ends up in a state where it thinks it isn’t paired and the camera thinks it is paired, and the two will flat-out not talk to one another.

The mobile app can’t possibly be this bad, I thought, and went to experiment. There’s no screenshot here because there is no UI in the iOS app to help with pairing at all — it just says “Connecting…” like normal and you need to figure out that you need to look at the camera on your own.

It’s like they don’t even care.

Next time on Secret Diary of a Side Project, we’ll talk about how to make the transition to working full-time on your side project at home in a healthy way, both mentally and physically.

  1. The problem, if you’re interested, is that the camera throws away any messages received while it’s processing a prior message. This was accidentally worked around in my old code by blocking while waiting for a response. The solution was to maintain a message queue and disallow a message to be sent until a response to the previous one has been received.
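In outline, that queueing approach looks something like this (a sketch with illustrative class and method names; the real code also has to match responses to their requests):

```objc
// Sketch: keep at most one message in flight. Outgoing messages wait in a
// FIFO queue until the response to the previous message has arrived.
@interface CameraMessageQueue : NSObject
@property (nonatomic, strong) NSMutableArray *pendingMessages;
@property (nonatomic) BOOL awaitingResponse;
@end

@implementation CameraMessageQueue

-(instancetype)init {
    if ((self = [super init])) {
        _pendingMessages = [NSMutableArray new];
    }
    return self;
}

-(void)enqueueMessage:(NSData *)message {
    [self.pendingMessages addObject:message];
    [self sendNextMessageIfIdle];
}

-(void)sendNextMessageIfIdle {
    if (self.awaitingResponse || self.pendingMessages.count == 0) {
        return;
    }
    self.awaitingResponse = YES;
    NSData *message = self.pendingMessages.firstObject;
    [self.pendingMessages removeObjectAtIndex:0];
    [self writeToCamera:message]; // Non-blocking write on the runloop.
}

// Called by the stream-event handler once a complete response has arrived.
-(void)responseReceived:(NSData *)response {
    self.awaitingResponse = NO;
    [self sendNextMessageIfIdle];
}

-(void)writeToCamera:(NSData *)message {
    // Write the bytes to the camera's output stream.
}

@end
```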