August 16th, 2015

Garmin VIRB XE for Automotive and Track Days: A First Impressions Review

Note: For the first part of this review, I’m going to ramble on a bit about my history with this sort of thing and why I’m so hopeful that the VIRB XE isn’t crappy for use on track days. If you don’t care, you can scroll down a bit to get to the real review.

We were totally ahead of the times, man!

I’ve always loved cars and driving. As soon as I had a car more interesting than my Mum’s 1.2L Vauxhall Corsa (SXi!) I started going on track days. As my skills and enjoyment grew, I wanted to show my friends and catalogue my improvement over time, so I started recording videos of my track driving.

But! Without data, track driving videos are boring. Check out this recent one of mine — even if you’re a car nut, I bet you won’t make it through more than a lap or two before getting bored.

Back in 2007 I was bored of my dataless videos, and as part of my final year at university, I wrote a prototype Mac application to add graphical overlays to my track day videos. It was just a prototype, but it worked great and I was really proud of what I’d made — enough that it still gets a space in my abbreviated life history.

However, while the software was ready, the hardware for gathering the data just wasn’t there. The iPhone had only just arrived, the iPad didn’t exist yet, and the other smartphone platforms of the time weren’t quite suitable. In particular, the Windows Mobile devices of the time didn’t have accurate enough clocks to reliably time the data, warranting a whole section in my dissertation discussing interpolating timestamps.

In 2007, no camera came close to the tiny action cameras of today (particularly in the consumer space), so I ended up using an HDV camcorder strapped into the car.

For recording data from the car I used a reasonably high-end (in the consumer space) OBD to Serial dongle that was advertised as being “high speed”. It read data from the CAN bus of my car at roughly 5Hz, which meant if you wanted to record multiple properties at once, you rapidly lost nuance in your data.

Since there was nothing like the iPad back then, I ended up using a tablet PC designed for outdoor use - it had a digital pen for input, and a special display that was readable outdoors and terrible everywhere else. This thing ran full-blown Windows XP and cost a fortune.

I had well over £3,000/$4,500 worth of big, heavy equipment. Here’s an example of what all that would get you when combined with my prototype software:

 

Perfectly acceptable (despite the hilariously slow data acquisition rate), but I ended up abandoning the project. Strapping all that stuff into your car was just not fun, and the marshals at most track days I went to weren’t desperately happy with the thought of that amount of stuff flying around the car if I crashed. Compare the photos above with my equipment list below and you’ll see just how far we’ve come!

VIRB XE: The Review

This review focuses on the experience the VIRB XE gives when using it to create driving videos, typically on a track day or on a road trip. As well as the camera itself, I’ll be using it with the following equipment:

  • An OBDLink LX — a Bluetooth OBD dongle for interfacing with the car.
  • A Raceseng Tug View — a tow hook with an integrated GoPro mount.
  • An Audio-Technica ATR3350 microphone and Zoom H1 audio recorder.


The camera is attached to the front of my car (along with a lot of bugs!) using the Tug View.

A Note On Audio

Garmin claims their microphone “…records clean and clear audio that cameras in cases just can’t pick up”, which is an implied bash at GoPro, I suppose. While that may be true, the interesting noises from a car come from under the bonnet or out the back, neither of which is an interesting place for a camera. Therefore, this review won’t deal with sound quality.

That said, my video explaining how to get good sound quality from your car on a track day does use the VIRB XE for the clips at the end, so if you’re an expert on what wind noise should sound like, go nuts!

 

A Note On Video Quality

I’m not going to directly compare video quality to other cameras either — I don’t have the skill set to do a good job of it. The video quality seems great, though, and the camera does an admirable job in difficult autoexposure situations, like driving through a shady forest on a sunny day.

Pre… Impressions…?

Garmin, I’m going to level with you: paper launches suck. This camera was announced in April and I was super excited about it, thrusting cash at my computer screen with the enthusiasm of a kid in a candy store. And then you said “summer”, and my enthusiasm waned. I went to a track day in August (firmly in “summer”) and the camera still wasn’t available. “Garmin suck!” I found myself saying to my friend, grumpy that I was still waiting for the camera.

That’s a pretty negative feeling to come back from.

First Impressions

This review is going to compare to the GoPro a lot. They’re the de facto standard in this space, and I’ve been using them for years. They have a huge amount of momentum, but I’ve actually been falling out of love with them for a little while. They’ve always been a bit fiddly, but silly design decisions like that stupid port cover and a flimsy USB connector that’s soldered (poorly, in one of mine) to the mainboard make them feel fragile, which is exactly the opposite of what you want in an outdoor action camera.

Within seconds of pulling the VIRB XE out of its box, you realise it’s different. After a couple of minutes, you get the feeling that it’s been designed with care for its intended environment — dropping off my bike into a muddy puddle.

The whole thing is really well put together. A few particular details stand out for me:


Easy-to-push buttons and a big, chunky “record” switch, all great to use with gloves on.


The screen is lovely and clear compared to that of the GoPro.


A little tray holds inserts that absorb moisture to prevent the camera from fogging. The inserts are reusable and four are included in the box (one of which I promptly lost because they’re small and I’m stupid).


All electronic interfacing is done using this external set of pins. No female ports means no flimsy, load-bearing soldering, no holes for water to get in, and no stupid port cover.


Sensibly, they’ve accepted that GoPro currently rule the roost in the market and the camera is directly compatible with the GoPro ecosystem of mounts.

However! It’s not all perfect.

A very minor niggle is that the “Menu” button on mine feels a bit weird. You feel it click when you push it, but nothing happens. You need to push a tiny bit harder to get the button to register.

A much less minor niggle is the cable connection mechanism. The cable snaps on using a very rugged connector (which is great), but when I pick the camera up it disconnects as if I’d unplugged it. I can reproduce this 100% of the time with my camera and cable, which is quite worrying. Randomly disconnecting is a great way to corrupt the filesystem. Sure, I can work around that by taking the SD card out and using a card reader, but what happens if my dog bumps my desk during a firmware update?

Hopefully, this is just a niggle with my particular camera. I’ll contact Garmin about it and update this review with their reply.

Recording a Car Video

During setup, the camera created a WiFi network and paired with my iPhone perfectly, and you can customise the network’s SSID and password on the camera’s screen.

Next, I connected it to my OBDLink LX. It took a few clicks of the “Scan” option in the VIRB’s Bluetooth settings before it saw my OBD dongle, but once it found it the two paired instantly. While the camera was adamant it was connected to my car, the VIRB App on my iPhone reported “No connected sensors”. Thankfully the camera was right, and the data from my car was recorded perfectly. Hopefully the glitch in the app will be fixed.

I attached the camera to the front mount on my car, started my audio recorder then used the VIRB app to start the camera from my iPhone. After a little beep of the horn (for syncing my separate audio recording with the video), I set off for a 25-minute drive around a local lake.

Once home, I was able to connect to the camera using my phone and stop recording. Everything appeared to have worked just fine.

Editing a Car Video

This is where I’m ready to be let down. I wrote the app I wanted (well, a prototype of it) eight years ago, and nothing has come close since. Like the bride who’s been planning her wedding since she was a small girl, I have expectations that reality can never quite match. Nobody will write the app I want.

Data and Gauges

Expectations lowered, I fire up VIRB Edit for the first time and import the recording straight from the camera.

Holy crap. With zero effort I have a full set of data and a map synced to my drive. This is wonderful!

The quality of the data recorded by the VIRB seems great. The OBD data came out perfectly despite there being a couple of metres and an engine between the camera and the Bluetooth OBD adapter, and the application gracefully handled the device losing its GPS fix for a few seconds, resulting in a slightly funny-looking map (bottom left of the map in the screenshot above; the road isn’t that square) but no other problems.

However, the data is a bit too perfect, and the app seems too trusting of it. G-forces in particular: with the camera bolted directly to my car’s chassis, its internal accelerometer picks up every tiny little vibration, which VIRB Edit displays without filtering, as this example from a perfectly smooth road shows:

 

It’d be nice if there were an option to have the application apply a low-pass filter to the data. This would reduce the responsiveness of the data slightly, but my 1,200kg car isn’t changing direction fast enough in any axis for that to be a huge problem.
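To illustrate the sort of thing I mean, here’s a minimal sketch of an exponential moving average filter (this is just the general idea in Objective-C, not anything from VIRB Edit, and the class name is made up):

@interface LowPassFilter : NSObject
@property (nonatomic) double alpha; // 0.0–1.0; smaller values give smoother but laggier output.
@property (nonatomic) double value;
@end

@implementation LowPassFilter

// Blend each new sample with the previous output, smoothing out vibration spikes.
-(double)addSample:(double)sample {
    self.value = (self.alpha * sample) + ((1.0 - self.alpha) * self.value);
    return self.value;
}

@end

Feeding each accelerometer sample through something like this before displaying it would hide the chassis vibration while still showing genuine cornering and braking forces.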

VIRB Edit comes with a number of templates which work great, and a lot of individual gauges that you can customise the colours of to create your own layouts and styles.

If that’s not enough, you can create your own gauges and edit them, which is a superb feature for power users to have. I plan to make gauges in VIRB Edit to match the ones in my car, and I bet others will do the same.

Video Editing

VIRB Edit is a basic, newbie-friendly video editing application, and the features it does have work well, although I did notice a little audio hiccup during playback when two sequential clips (the camera splits recordings into fifteen-minute chunks) are placed together.

There are a number of features I need to produce my track day videos that VIRB Edit doesn’t have:

  • The ability to import a separate audio track (from my audio recorder) and precisely sync it (and keep it synced) with the audio track of the video.
  • The ability to rotate the video slightly when I mount the camera slightly off-level.

Now, I’m not saying Garmin should implement all these features — that’d be silly given the number of video editors already out there at any price range you can mention. Normally, I’d just import my video into my editor of choice and edit to my heart’s content. However, the addition of data overlays makes that problematic — if I add my data overlays in VIRB Edit then export for further editing, a number of problems occur:

  • An extra layer of encoding has happened, reducing the quality of the video.
  • The gauges are baked into the video, meaning any rotations, colour corrections, etc will be applied to them as well.

I could go the other way — import the raw video into my editor of choice, apply corrections, merge in the better audio, etc, but you still end up with an extra encoding step that reduces quality.

Solving this is actually relatively easy, and my prototype application from years ago had this built-in: several video formats and containers support videos with alpha channels. What I’d love to do is add my data overlays in VIRB Edit then export a lossless video containing only the overlays on a transparent canvas. This way, I could import the original video and the overlays into my editor of choice and keep them in separate tracks, allowing me to apply rotations and colour corrections to the video to my heart’s content. Bonus points for being able to export each overlay separately, allowing the sweet animations seen in Garmin’s own VIRB XE promotional video!

Hail To The Power User

One thing I’d like to call out about this product that won’t be talked about in most reviews is Garmin’s attitude towards advanced/power users. Many companies lock away the inner workings of their products in what often turns out to be a futile effort, as users tend to reverse-engineer the fun stuff anyway. GoPro’s WiFi protocol has been mostly reverse-engineered, for instance, and there are a number of GoPro “hacks” (which mostly turn out to be undocumented config files) to enable features like long exposures.

Garmin, on the other hand, publishes documentation for controlling their VIRB cameras on their own VIRB Developer site, and VIRB Edit has an “Advanced Editing” button on its already pretty advanced gauge editor which opens up a JSON file in your favourite text editor alongside a PDF documenting the file format.

For most users, this means nothing. However, I love this attitude — I can customise my gauges to my heart’s content and write little apps to control my camera if I want, all using tools provided to me by Garmin.

Conclusion

Short Version

I’ve already - and I’m not joking - sold all of my GoPro cameras.

Long Version

I bought this camera within its first week of availability in Sweden, and unfortunately these days that means software niggles are to be expected. However, I’ve owned a number of Garmin devices (and still do), and they have a long history of continuing to improve their products over time. My four-year-old GPS unit still gets regular software updates, for instance. I have a very positive opinion of Garmin as a company: they make solid products and solid software, so I’m hopeful they’ll resolve the bugs I found.

I am rather concerned about the flaky connection between the camera and its USB cable, though. This is certainly a hardware issue, so I’ll contact Garmin and see what they say.

Overall, though, I love this camera and have already sold all my GoPros. The combination of its superb build quality and extra data acquisition features is killer for me, and a joy to have after years of lacklustre GoPro updates.

Hardware

Good

  • It feels like it’s built like a tank — I love the record switch in particular.
  • Lots of thought in the design — the moisture tray and port design stand out.
  • Lovely screen compared to the GoPro.
  • Paired with my OBD dongle and phone effortlessly.
  • Directly compatible with the GoPro ecosystem of mounts.

Bad

  • PAPER LAUNCH DAMNIT! Don’t show me a product I want then wait four months to start selling it!
  • Cable doesn’t fit snugly and disconnects when I move the camera. Hopefully this is a one-off thing.
  • One of the buttons feels weird. Again, hopefully a one-off niggle.
  • Proprietary cable isn’t super great when you need an emergency charge in a world of micro USB. I see why they did it and, like Apple’s Lightning, the pros outweigh the cons most of the time.
  • Only one sticker in the box. I’m prepared to go full fanboy with this thing, and I only have one sticker?!

Software

Good

  • Great Mac citizen — you’ve no idea how many companies ship crappy “cross-platform” desktop software.
  • Gauges functionality covers all my uses, from great looking templates through to complete and total customisability.

Bad (as of August 2015)

  • Accelerometer data needs a low-pass filter — it’s unusably noisy when the camera is bolted to my car’s chassis.
  • Audio glitch when transitioning between clips that’ve been cut up by the camera.

Missing Features

  • Ability to export a translucent video containing only the gauges so I can edit the source video in my preferred editor and keep the data overlays clean.

June 21st, 2015

Secret Diary of a Side Project: In Reality, I've Only Just Started

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.


On March 27th 2013, I started an Xcode project called EOSTalk to start playing around with communicating with my new camera (a Canon EOS 6D) over its WiFi connection.

Over two years and 670 commits later, on June 5th 2015 (exactly a month late), I uploaded Cascable 1.0 to the App Store. Ten agonising days later, it went “In Review”, and seventeen hours after that, “Pending Developer Release”.

Late in the evening the next day, my wife, our dog, a few Twitter friends (thanks to Periscope) and I sat together by my desk and clicked the Release This Version button.

 

I absolutely meant to blog more in the three months since my last Secret Diary post, and I’m sorry if you’ve been looking forward to those posts. An interesting thing happened: I thought I’d have way more time for stuff like blogging after leaving my job and doing this full-time, but I’ve ended up with way less. A strict deadline and a long issues list in JIRA made this a full-time, 9am-to-6pm job. So much for slacking off and playing video games!

Fortunately, though, I still have a few things I want to write about, and now that I can slow down a bit, I should start writing here more frequently again.

Statistics

Some stats for Cascable 1.0 for the curious:

Objective-C Implementation: 124 files, 23,000 lines of code
C/Objective-C Header: 133 files, 2,400 lines of declaration
Swift: None
Commits: 670

Now, lines of code is a pretty terrible metric for comparing projects, but here are the stats for the Mac version of Music Rescue, the last app of my own creation that brought in the Benjamins:

Objective-C Implementation: 154 files, 24,000 lines of code
C/Objective-C Header: 169 files, 4,100 lines of declaration
Swift: This was 2008; I barely had Objective-C 2.0, let alone Swift!

As you can see, the projects are actually of a similar size. It’s a completely meaningless comparison, but it’s interesting to me nonetheless. Back in 2008 I considered Music Rescue a pretty massive project, something I don’t think about Cascable. I guess my experience with the Spotify codebase put things in perspective.

You can check Cascable out here. You should totally buy a copy!

Celebrating

At NSConference 7 I gave a short talk which was basically Secret Diary: On Stage, in which I discussed working on this project.

 

In that talk, I spoke about a bottle of whiskey I have on my desk. It’s a bottle of Johnnie Walker Blue Label, and at £175 it’s by far the most expensive bottle of whiskey I’ve ever bought. When I bought it, I vowed it’d only be opened when a real human being who wasn’t my friend (sorry Tim!) exchanged money for my app.

Releasing an app is a reward in itself, but there’s nothing tangible about it. Having that physical milestone there to urge me on really helped when I was on hour four of debugging a really dumb crash, for instance.

This weekend, that bottle was opened. It tasted like glory.


May 1st, 2015

Build-Time CFBundleVersion Values in WatchKit Apps

When building a WatchKit app, you’ll likely encounter this error at some point:

error: The value of CFBundleVersion in your WatchKit app’s Info.plist (1) does not match the value in your companion app’s Info.plist (2). These values are required to match.

Easy, right? We just make sure the values match. But… what if we’re using dynamically generated bundle version numbers derived from, say, the number of commits in our git repository? Well, we just go to the WatchKit app’s target in Xcode, click the “Build Phases” tab and… oh. There isn’t one.

So, if we’re required to have our WatchKit app mirror the CFBundleVersion of our source app and we’re generating that CFBundleVersion at build time, what do we do? First, we wonder why this mirroring isn’t automatic. Second, we try to modify the WatchKit app’s Info.plist file from another target before realising that it screws with its code signature. Third, we come up with this horrible workaround!

The Horrible Workaround

The workaround is to generate a header containing definitions for your version numbers, then use Info.plist preprocessing to get them into your WatchKit app’s Info.plist file.

This little tutorial assumes you already have an Xcode project with a working WatchKit app set up.

Step 1

Make a new build target, selecting the “Aggregate” target type under “Other”.

Step 2

In that new target, create a shell script phase to generate a header file in a sensible place that contains C-style #define statements to define the version(s) as you see fit.

My example here generates two version numbers (a build number based on the number of commits in your git repo, and a “verbose” version that gives a longer description) then places the header into the build directory.

# Describe the current commit (most recent tag, commit count since it, and abbreviated hash).
GIT_RELEASE_VERSION=$(git describe --tags --always --dirty --long)

# Use the total number of commits in the repository as the build number.
COMMITS=$(git rev-list HEAD | wc -l)
COMMITS=$(($COMMITS)) # Arithmetic expansion strips the whitespace padding wc adds.

mkdir -p "$BUILT_PRODUCTS_DIR/include"

# Write the values out as C-style #defines, stripping anything up to and including a leading "v" in the tag.
echo "#define CBL_VERBOSE_VERSION ${GIT_RELEASE_VERSION#*v}" > "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"
echo "#define CBL_BUNDLE_VERSION ${COMMITS}" >> "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"

echo "Written to $BUILT_PRODUCTS_DIR/include/CBLVersions.h"

The file output by this script looks like this:

#define CBL_VERBOSE_VERSION a6f5bd0-dirty
#define CBL_BUNDLE_VERSION 1

Step 3

Make your other targets depend on your new aggregate target by adding it to the “Target Dependencies” item in the target’s “Build Phases” tab. You can add it to all the targets that you’ll use the version numbers in, but you’ll certainly need to add it to your WatchKit Extension target.

Step 4

Xcode tries to be smart and will build your target’s dependencies in parallel by default. However, this means that your WatchKit app will be built at the same time as the header is being generated by the aggregate target, which will often result in build failures due to the header not being available in time.

To fix this, edit your target’s scheme and uncheck the “Parallelize Build” box in the “Build” section. This will force Xcode to wait until the header file has been generated before moving on.

Step 5

Edit the build settings in your targets as follows:

  • Preprocess Info.plist File should be set to Yes.
  • Info.plist Other Preprocessor Flags should be set to -traditional.
  • Info.plist Preprocessor Prefix File should be set to wherever your generated header file has been placed. In my case, it’s ${CONFIGURATION_BUILD_DIR}/include/CBLVersions.h.
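For those who keep their build settings in xcconfig files rather than the Xcode UI, these correspond (as far as I can tell) to the following settings:

INFOPLIST_PREPROCESS = YES
INFOPLIST_OTHER_PREPROCESSOR_FLAGS = -traditional
INFOPLIST_PREFIX_HEADER = ${CONFIGURATION_BUILD_DIR}/include/CBLVersions.h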

Step 6

Finally, change the values in your Info.plist files to match the keys in your generated header file. In my case, I set CFBundleVersion (also known as Bundle Version or Build depending on where you’re looking in Xcode) to CBL_BUNDLE_VERSION.
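For example, in the raw XML of my Info.plist the entry ends up looking something like this, and the preprocessor substitutes the generated number in at build time:

<key>CFBundleVersion</key>
<string>CBL_BUNDLE_VERSION</string>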

Step 7

Go to the Apple Bug Reporter and ask (nicely) that they give us build phases back for WatchKit apps. You can dupe mine (Radar #20782873) if you like.

Step 8


Success!

Conclusion

This is horrible. We need to disable parallel builds and generate intermediate headers and all sorts of nastiness. Hopefully we’ll get build phases back for WatchKit apps soon!

You can download a project that implements this tutorial here.


March 24th, 2015

NSConference 7

“I checked the version of your presentation with the video in it, and it works fine. Shall we just use that one, then?”

Panic set in, again. Scotty was already onstage and in the process of introducing me, so I had to think fast. I’d been accepted to give a “blitz talk” — that is, a short, 10-minute long presentation — at NSConference this year, and I’d put a little video clip that at best could be described as “stupid” into my slides. I thought it was funny, but I was so worried that it’d be met with a stony silence by the hundreds of attendees that I’d also provided a copy without the video.

At least it’ll be an interesting story to tell, I thought to myself, and confirmed that I’d use the version with the video before stepping out into the blinding lights of the stage.

Here we go!


NSConference has always been about community. I’ve been fortunate enough to attend a number of them over the years, following it around the UK from Hatfield to Reading to Leicester. I’ve met a number of friends there, and it’s always inspiring. The mix of sessions normally has a fairly even distribution of technical and social topics, and this year was no exception — some fantastic speakers gave some wonderfully inspiring talks that really touched close to home, and others gave some fascinating technical talks on the old and the new.

Rather than list them now, I’m going to do a followup post when the NSConference videos are released that’ll link to my favourite talks and discuss why I found them so great.

However, the talks are only half of it. I’m pretty shy around new people, and my typical conference strategy is to sit with people I already know during the day, then hide in a corner or my hotel room during the evenings. This time, however, I was determined to at least try to make friends, and with little effort I found myself speaking to so many new people I can barely remember them all. Everyone was so friendly and so supportive, and I had a huge number of really interesting conversations with people from all over the world.


A joke is a great way to break the ice, someone once said. I start with “The lunches aren’t so light if you go back for thirds, are they?”1, referencing the fact we were given a light lunch that day in preparation for the banquet later. Sensible chuckle from the audience. Alright, maybe my video won’t flop after all!

“Hello everyone,” I continued, “My name is Daniel and for the past four years I’ve been working as a Mac and iOS developer at Spotify. And four days ago — last Thursday — I left to become an Indie developer. Today, I’m—”

I was interrupted by a huge round of applause that went on long enough to mask my stunned silence. This is what NSConference is about: hundreds of friends and strangers coming together to support one another in whatever we’re doing. One of the larger challenges in what I’m doing is the solitude: I left a job where I interacted with a lot of people every day for one where I sit alone in a corner of my house. As I stand on the stage, the applause lifts me up and drives home that while I may physically be on my own, I have a huge community of peers who are right behind me and are willing me to succeed.

As the applause dies down, I do a “Thank you, goodnight!” joke to move around the stage and regain my composure. Thirty seconds later, we arrive at my stupid video.

My thumb hovers over the button to advance the slide and start the video. If I double-click it, it’ll skip the video! A moment’s hesitation…

Click.

That two second video clip got what I think was one of the biggest laughs of the conference, and I was so relieved I even started laughing at it myself.

Right! Time to get my shit together — I’m supposed to be sharing information!


At the end of the conference, heartfelt things were said onstage as the sun set on the final NSConference — there wasn’t a dry eye in the house. During this, staff handed a glass of whiskey to every single person in the audience. At the very end, Scotty held a toast, then left the stage as we clinked glasses.

The last NSConference came to a close with the sound of hundreds of people clinking glasses in toast to seven years of incredible experiences. The sound resonated around the hall for a number of minutes before eventually subsiding, and is something I’ll never forget.

As a tribute to the conference and the work the organisers put in, the community is banding together to raise money for Scotty’s favourite cause, Water.org, which has the goal of providing clean water to everyone who needs it. You can donate at the NSConference 7 fundraiser page.

Clink.

  1. It should be noted that my talk wasn’t really scripted so I’m recounting what I said from memory. When the video is released it’ll likely prove that I’m misremembering my exact wording. The gist will be the same, though.


March 10th, 2015

Secret Diary of a Side Project: The Refactor From Hell


Why I need a designer: Exhibit A.

THIS BUTTON.

This innocuous little button cost me a week. Let that settle in. A week.

It’s a simple enough premise — when the user gets presented a dialog like this, you should give them a way out. Presenting a button-less dialog is all kinds of scary — what if the camera crashes and doesn’t give the expected response, or any response at all? Sure, I can guard against that, but still.

So, it’s settled! I’ll implement a Cancel button so the user can back out of pairing with their camera. What a completely logical and easy thing to do.

PROGRAMMING!

Here’s the problem I faced:

Typically, when you connect to a camera you send it a message to initialise a session, then wait for a success response. This normally takes a small number of milliseconds, but when the camera is in pairing mode it won’t respond at all until the user has gone through a few steps on the camera’s screen.

All we need to do is sever the connection to the camera while we’re waiting, right? Easy enough. However, the architecture of my application has it working with the camera in a synchronous manner, writing a message then blocking until a response is received. All this is happening on a background thread so it doesn’t interfere with the UI, and since the camera has a strict request-response pattern, it works well enough. However, in this case, I can’t sever the connection on the camera’s thread because it’s completely blocked waiting for a response. If I try to do this from a separate thread, I end up with all sorts of nasty state — dangling sockets and leaked objects.

The solution to this sounds simple — instead of doing blocking reads, I should schedule my sockets in a runloop and use event-based processing to react when responses are received. That way, nothing will ever be blocked and I can sever the connection cleanly at any point without leaving dangling sockets around.
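To give an idea of the shape of that change, here’s a minimal sketch of runloop-scheduled streams (the class and property names here are hypothetical, not from my actual code):

@interface CameraConnection : NSObject <NSStreamDelegate>
@property (nonatomic, strong) NSInputStream *inputStream;
@property (nonatomic, strong) NSOutputStream *outputStream;
@end

@implementation CameraConnection

-(void)open {
    // Scheduling the streams in a runloop delivers reads and writes as events,
    // so no thread is ever stuck blocking on the camera.
    for (NSStream *stream in @[self.inputStream, self.outputStream]) {
        stream.delegate = self;
        [stream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        [stream open];
    }
}

-(void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    if (stream == self.inputStream && eventCode == NSStreamEventHasBytesAvailable) {
        uint8_t buffer[1024];
        NSInteger bytesRead = [self.inputStream read:buffer maxLength:sizeof(buffer)];
        if (bytesRead > 0) {
            // Append to a response buffer and parse out complete messages here.
        }
    }
}

-(void)close {
    // Since nothing is blocked, the connection can be severed cleanly at any time.
    for (NSStream *stream in @[self.inputStream, self.outputStream]) {
        stream.delegate = nil;
        [stream close];
        [stream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    }
}

@end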

Easy!


Seven hours later I’m sitting at my desk with my head in my hands, wishing I’d never bothered. It’s 11pm, and later my wife tells me she’d approached me to come play video games but decided I looked so grumpy I’d be best left alone. I have no idea why it’s not working. I’m sending the exact same bytes as I was before, and getting the same responses. It actually works fine until traffic picks up — as soon as you start to send a lot of messages, random ones never get a response.

Well after midnight, I throw in the towel. I’d been working at this one “little” problem nonstop for eight hours, my code was a huge mess and I almost threw away the lot.

“I’m such an idiot,” I told my wife as I got into bed, “I even wrote about this on my blog, using the exact code I’m working on as an example”.

Yup, this is that old but reliable code I wrote about a couple of months ago. The class I said I’d love to refactor but shouldn’t because it worked fine.

One way of proving a hypothesis, I suppose.

As I was drifting off to sleep, I had an idea. I decided it could wait until the morning.


I slumped down into my chair the next morning and remembered my idea. Twenty minutes later, it was working like a charm1.

Sigh.

So, now it’s working, and it’s a darn sight better looking than my old code. However, the two years’ worth of confidence and proven reliability that I had with the old code has vanished. It seems to work, yes, but how can I be sure? Maybe there are bugs in there that haven’t shown themselves yet.

If You Don’t Have Experience, You Need Data

I’ve been writing unit tests here and there for parts of my app where it makes sense.

“Business logic” code for the app is simple enough to test — instantiate instances of the relevant classes and go to town:

CBLShutterSpeed *speed = [[CBLShutterSpeed alloc] initWithStopsFromASecond:0.0];
XCTAssert(speed.upperFractionalValue == 1, @"Failed!");
XCTAssert(speed.lowerFractionalValue == 1, @"Failed!");

CBLShutterSpeed *newSpeed = [speed shutterSpeedByAddingStops:-1];
XCTAssert(newSpeed.upperFractionalValue == 1, @"Failed!");
XCTAssert(newSpeed.lowerFractionalValue == 2, @"Failed!");

Parsing data given back to us by the camera into objects is a little bit more involved, but not much. To achieve this, I save the data packets to disk, embed them in the test bundle and load them at test time. Since we’re testing the parsing code and not that the camera gives back correct information, I think this is an acceptable approach.

-(void)test70DLiveViewAFRectParsing {
    NSData *rectData = [NSData dataWithContentsOfFile:[self pathForTestResource:@"70D-LiveViewAFRects-1.1.1.dat"]];
    XCTAssertNotNil(rectData, @"afRect data is nil - possible integrity problem with test bundle");

    NSArray *afAreas = [DKEOSCameraLiveViewAFArea liveViewAFAreasWithPayload:rectData];
    XCTAssertNotNil(afAreas, @"afRects parsing failed");

    XCTAssertEqual(31, afAreas.count, @"Should have 31 AF areas, got %@", @(afAreas.count));

    for (DKEOSCameraLiveViewAFArea *area in afAreas) {
        XCTAssertTrue(area.active, @"Area should be active");
        XCTAssertFalse(area.focused, @"Area should not be focused");
    }
}

Alright, so, how do we go about testing my newly refactored code? It poses a slightly unusual problem, in that my work with this camera is entirely based on clean-room reverse engineering: I don’t have access to any source code or documentation on how this thing is supposed to work. This means that I can’t compile the camera’s code for another platform (say, Mac OS) and host it locally. Additionally, the thing I’m testing isn’t “state” per se; I want to test that the transport itself is stable and reliable, and that my messages get to the camera and its responses get back to me.

This leads to a single conclusion: To test my new code, I need to involve a physical, real-life camera.

Oh, boy.


Unit testing best practices dictate that:

  • State isn’t transferred between individual tests.
  • Tests can execute in any order.
  • Each test should only test one thing.

The tests I ended up writing fail all of these practices. Really, they should all be squished into one test, but a single test that’s 350 lines long is a bit ungainly. So, we abuse XCTest to execute the tests in order.

First, we test that we can discover a camera on the network:

-(void)test_001_cameraDiscovery {
    XCTestExpectation *foundCamera = [self expectationWithDescription:@"found camera"];

    void (^observer)(NSArray *) = ^(NSArray *cameras) {
        XCTAssertTrue(cameras.count > 0);
        _camera = cameras.firstObject;
        [foundCamera fulfill];
    };

    [[DKEOSCameraDiscovery sharedInstance] addDevicesChangedObserver:observer];

    [self waitForExpectationsWithTimeout:30.0 handler:^(NSError *error) {
        [[DKEOSCameraDiscovery sharedInstance] removeDevicesChangedObserver:observer];
    }];
}

…then, we make sure we can connect to the found camera:

-(void)test_002_cameraConnect {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTestExpectation *connectedToCamera = [self expectationWithDescription:@"connected to camera"];

    [self.camera connectToDevice:^(NSError *error) {
        XCTAssertNil(error, @"Error when connecting to camera: %@", error);
        [connectedToCamera fulfill];
    } userInterventionCallback:^(BOOL shouldDisplayUserInterventionDialog, dispatch_block_t cancelConnectionBlock) {
        XCTAssertTrue(false, @"Can't test a camera in pairing mode");
    }];

    [self waitForExpectationsWithTimeout:30.0 handler:nil];
}

(I’m a particular fan of that XCTAssertTrue(false, … line in there.)

Next, because we’re talking to a real-life camera, we need to make sure its physical properties (i.e., ones we can’t change in software) are correct for testing:

-(void)test_003_cameraState {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTAssertTrue(self.camera.connected, @"Camera should be connected");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeAutoExposureMode] intValue], EOSAEModeManual,
                   @"Camera should be in manual mode for testing.");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeLensStatus] intValue], EOSLensStatusLensAvailable,
                   @"Camera should have an attached lens for testing");

    DKEOSFileStorage *storage = self.camera.storageDevices.firstObject;
    XCTAssertTrue(storage.capacity > 0, @"Camera should have an SD card inserted for testing.");
    XCTAssertTrue(storage.availableSpace > 100 * 1024 * 1024, @"Camera storage should have at least 100Mb available for testing.");
}

Once the camera is connected and verified to be in an agreeable state, we can start testing.

  • In order to test against the heavy-traffic dropouts that drove me to insanity that night, I run through every single valid value for all of the exposure settings (ISO, aperture, shutter speed) as fast as I possibly can.

  • To test that event processing works correctly, I stream images from the camera’s viewfinder.

  • To test filesystem access, I iterate through the camera’s filesystem.

  • To test commands, I take a photo.

  • To test that large transfers work, I download the photo the previous test took - about 25MB on this particular camera.

  • And finally, I test that disconnecting from the camera works cleanly.

As you can see, this is a pretty comprehensive set of tests — each one is meticulous about ensuring the responses are correct, that the sizes of the data packets received match the sizes reported by the camera, etc — they’re essentially an automated smoke test.

The next challenge is to get these to run without human intervention. I can’t just leave the camera on all the time: if it doesn’t receive a network connection within a minute or two of powering on, it’ll error out and you need to restart its WiFi stack to connect again, which isn’t possible without human intervention. Perhaps a software-controlled power switch would allow the tests to power the camera on and off at will. However, that’s a challenge for another day.

I TOLD YOU SO, DAMNIT

So. In an earlier post I talked about being restrained when you think about refactoring code, and my ordeal here is exactly why. At the beginning it looked simple enough to do, but I ended up losing way too much time and way too much sleep over it, and when it finally appeared to work I had no data on whether it was any good or not. If I’d gone through all of that with no good reason it would’ve been a complete waste of time and energy.

But! Thanks to all this work, you can now cancel out of camera pairing from your iOS device! It’s a disproportional amount of work for a single button, but that’s the way software development goes sometimes — no matter how obvious the next task might look, tomorrow’s just a mystery, and that’s okay. It’s what makes it fun!

Plus, I now have a decent set of smoke tests for communicating with a real-life camera, which is something I’ve been wanting for a long time — a nice little silver lining!

Epilogue

After implementing all this, I decided to have a look at how the camera’s official software approached this problem, UI-wise.

It looks like a floating panel, but it behaves like a modal dialog. There’s no way to cancel from the application at all and if you force quit it, the software ends up in a state where it thinks it isn’t paired and the camera thinks it is paired, and the two will flat-out not talk to one another.

The mobile app can’t possibly be this bad, I thought, and went to experiment. There’s no screenshot here because there is no UI in the iOS app to help with pairing at all — it just says “Connecting…” like normal and you need to figure out that you need to look at the camera on your own.

It’s like they don’t even care.


Next time on Secret Diary of a Side Project, we’ll talk about how to make the transition to working full-time on your side project at home in a healthy way, both mentally and physically.

  1. The problem, if you’re interested, is that the camera throws away any messages received while it’s processing a prior message. This was accidentally worked around in my old code by blocking while waiting for a response. The solution was to maintain a message queue and disallow a message to be sent until a response to the previous one has been received.
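For illustration, the queueing approach described in that footnote boils down to something like this (the method and property names here are hypothetical):

-(void)enqueueMessage:(NSData *)message {
    [self.pendingMessages addObject:message];
    [self sendNextMessageIfIdle];
}

-(void)sendNextMessageIfIdle {
    // The camera discards anything sent while it's still busy, so only ever
    // have one message in flight at a time.
    if (self.awaitingResponse || self.pendingMessages.count == 0) return;
    self.awaitingResponse = YES;
    NSData *next = self.pendingMessages.firstObject;
    [self.pendingMessages removeObjectAtIndex:0];
    [self writeMessageToCamera:next];
}

-(void)didReceiveResponse:(NSData *)response {
    self.awaitingResponse = NO;
    // Hand the response back to whoever sent the message, then move on.
    [self sendNextMessageIfIdle];
}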


February 25th, 2015

Rebrand

Welcome to my new blog!

It’s like my old blog, but with a much lighter appearance that hopefully provides a nicer reading environment. It should also be faster, and much better on mobile. As well as nearly all of my old posts, I’ve added a spiffy new More About Me page with a succinct version of my life story, if you’re interested. I’ve also spruced up the My Apps and Archive pages.

I’ve tried my very best to make sure all the links from my old blog work with this new one, but if you spot anything amiss I’d appreciate you getting in touch with me on Twitter or emailing blog at this domain and letting me know.

I actually ended up going through an interesting journey while putting this together. To make sure that every post was formatted properly in the new engine, I read through every single one of my posts all the way back to 2004, and let me tell you, ten years ago I was an idiot. I seriously considered removing all the posts I found embarrassing, but in the end I decided that the journey is just as important as the destination, so they stayed. The only posts I removed were ones that were nothing but links to now-defunct websites.

Technical Details

My previous blog was generated by Octopress, which is a blogging product built on top of Jekyll. However, Octopress’ main selling point for newbies to this whole thing (i.e., me a few years ago) is also its biggest drawback: it’s a complete blogging platform out-of-the-box. This makes diving in and customising it extremely daunting, rather like being presented with a car and a spanner and told to replace the clutch plate. I did manage to customise a couple of little things on my old site, but not much.

So, a couple of weeks ago I sat here, new theme in hand, ready to try to put it into Octopress. It was soon apparent that I’d basically have to rip the entire thing apart to fully understand what was going on, and if I was going to do that, why not look at alternatives?

I’d recently tried out another static site compiler called nanoc for another project of mine, and really liked it. Where Octopress provides a fully featured blog out-of-the-box, nanoc provides nothing. The default site is literally a white “Hello World” page with no CSS at all. While this is daunting at first, it’s actually quite liberating — it took me about a week to put this whole thing together from scratch, and I now know every intimate detail about it which makes me really comfortable customising it in any way I need.

How The Site Is Put Together

  • There are three “things” in this entire site:

    1. Posts. These are markdown files.

    2. Pages. These are HTML fragments.

    3. Special items like the RSS feed.

  • Posts are put through a markdown parser (kramdown) then wrapped with the site’s template.

  • Pages are rendered pretty much as-is with nothing special going on other than being wrapped in the template. These include the About, Apps, and Archive pages, as well as the site’s home page.

  • When the template is rendered, pages containing the in_menu tag are placed in the site menu. This allows me to have “hidden” pages (like the 404 page) without any extra work.

  • Binary files (images and the like) live in a submodule of the blog’s source repo. Yes, git isn’t great with binaries (and there’s over 300MB of them for this site), but it works alright for my needs. These files get copied to the output directory with no processing at all.

I’m really pleased with the results of my work, and it gives me greater control over my presence on the web. Over time, I hope to add more features to the site as I work on my web skills.


February 13th, 2015

Secret Diary of a Side Project: Getting To 1.0

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about something that strikes fear into the heart of any programmer: planning. You won’t get to 1.0 without it!


If you’re anything like me, it’s likely that you have some form of issue tracker for your side project, detailing various bugs to be fixed and features to be added. In my instance, that ended up being a sort of rolling affair — I’d fix a bunch of things, see that my issue list was diminishing, then spend a while with the app prodding around until I found more things to add to the tracker. This was a perfectly acceptable approach in the beginning.

However, shortly after I committed to doing this full-time, I realised I had no longer-term plan. So, I sat down and decided that I’d try to release 1.0 relatively soon after going full-time, allowing plenty of time to gain feedback from real photographers. You see, I have tons of feature ideas, but until photographers tell me what they think, I don’t really have any data to tell me if those ideas are any good. Releasing a 1.0 early allows the app to be shaped by its users, rather than by my idea of what users want.

This is the result, based on nothing more than a loosey-goosey feeling of the state of the project so far:

Start collecting beta invites: 2015-03-10
First beta release: 2015-03-24 → 28
Post-beta questionnaire: 2015-04-28
1.0 App Store submit: 2015-05-05

Of course, I’ll be amazed if those deadlines stick. Still! It’s great to have something to aim for. I felt much better about myself.

…for a while.

A few days later I looked at those dates and started to feel a bit of dread. That March 10th date is when I really commit to releasing something – it’s when my marketing starts! I had no idea if I’d be able to do it or not. Eventually I realised the problem — the tasks in my issue tracker didn’t connect my project from where it is now to that 1.0 on May 5th.

It’s time to do some serious planning!

Shhh… Don’t Say “Agile”

I have a love-hate relationship with Agile. My first exposure to it was when I started at Spotify in early 2011. The company was very small at the time, and we were using… scrum, I think? I forget. Anyway, as the company grew the thing we were using turned out not to work so well. So, we tried a new thing. Then another new thing. Then the first new thing again but with a slight modification. Eventually, I flat-out stopped caring. “Just tell me how you want me to stick the notes on the wall, and I’ll be fine”, I’d say.

Fast-forward a few years, and a fellow named Jonathan joined the company. He’d written a book on Agile and handed out some copies. I took one with moderate-at-best enthusiasm, which then sat on my desk gathering dust. A few weeks later, he did a talk on a thing he called the “Inception Deck”, a method of planning out your product at its inception stages.

“This is perfect for Cascable!” I thought, and started furiously scribbling notes. After his talk, I told him I thought it was great. “Oh, really? I’m happy you think so — it’s all from my book though.”

At that point, I returned my copy of his book and bought an eBook of it instead, partly because I feel uncomfortable furthering my own app on something my employer paid for, but mainly because I like supporting good work.

I feel really uncomfortable plugging things on this blog — it’s not what it’s for. However, Jonathan’s book has immensely helped me as an independent developer trying to get an app out into the world, and a good deal of this post is inspired by things I learned from it. It’s called The Agile Samurai: How Agile Masters Deliver Great Software, and you can find it here at the Pragmatic Bookshelf.

Step One: Figure Out What You Want To Sell

If you were planning your app from the beginning, you’d start by deciding what you want your 1.0 to actually be. A side project is completely the opposite of that: you just create a new project and dive in, plucking ideas out of your head and going with them.

However, that isn’t sustainable if you want to ship a quality product, no matter how much you claim to “live in the code”. At some point you’re going to have to stop and figure this stuff out, which can be pretty daunting if you’re just chugging along in your code editor.

The “Inception Deck” I spoke about earlier really helped me with this. I won’t go into it in detail — it’s in the book I mentioned above as well as on the author’s blog – but it’s basically a set of small tasks you can do to really help kick a project off in the right direction.

Now, I’m not kicking off a project at all, and some of the items in the Inception Deck are geared a bit towards teams working on one project rather than the lone developer, but still — if some of the tasks help bring clarity to my project, I’m all for it!

Alright, it’s time to jump out of development and pretend I’m doing this properly by doing the planning at the beginning. I cherry-picked the most relevant tasks from the Inception Deck, and here’s what I came up with, more or less copy and pasted from my Evernote document:

The Inception Deck for Cascable 1.0

Why Are We Here?

This task helps establish why this project exists to start with.

The applications that come with WiFi-enabled cameras tend to be pretty terrible. We can do better, and make a WiFi connection an indispensable tool on a camera rather than a toy.


Elevator Pitch

This is a fairly standard thing in the software world these days. Describe the product in 30 seconds.

For photographers who need intelligent access to their camera and photos in the field, Cascable is an iOS app that connects to the camera over WiFi and opens up a world of possibilities. Unlike current apps, Cascable will develop and evolve to become an easy-to-use and indispensable tool for amateur and professional photographers alike.


The Not List

This one is new to me and was incredibly helpful. Defining what isn’t in scope for 1.0 can be as useful as defining what is.

In Scope for 1.0 — Things that will definitely make it.

  • Remote control of the basics: exposure control, focus and shutter.
  • Useful overlays for the above. Thirds grid, histogram, AE mode, AF info.
  • Calculating exposure settings for ND filters and astrophotography.
  • Saving calculation presets.
  • Viewing photos on the camera in the list.
  • Downloading photos to the device.
  • Viewing downloaded photos fullscreen, deleting downloaded photos.
  • Sharing downloaded photos and opening them in another app.
  • Apple Watch widget for triggering the shutter.

Not In Scope for 1.0 — Things that definitely won’t make it.

  • Cameras that aren’t Canon EOS cameras.
  • Cloud functionality.
  • Automatic downloading.
  • Support for videos in Files mode.

Unresolved — Things I’m not sure about.

  • Second screen mode for AppleTV, etc.
  • Applying Calculations Mode results to the camera.

What Keeps Me Up At Night

This exercise was also new to me. What things should you worry about, and which of those are beyond your control?

  • Not having dedicated QA.
  • Keeping “on the rails” and getting everything done properly and on time.
  • App Store rejection.
  • Canon getting uppity.

The first two of those are things I know I can fix myself already. The fear of App Store rejection is pretty much life as normal for iOS development, so there’s no real need to worry about that as long as I’m familiar with Apple’s guidelines and don’t bump into the edges of the (admittedly sometimes vague) rules. That last one is more nuanced, and something I need to get legal advice about. That is where I should concentrate my energy on gaining knowledge.

Conclusion

So, what’s the benefit of writing all this down? Well, I’ve understood what this project is about the whole time, but succinctly describing it to someone else is a bit of a challenge. Not having answers to questions like “Will you support X camera?” or “Can I work with video?” was a bit embarrassing. Now, I can answer “Not at 1.0, no.” with confidence. Sure, I don’t need to answer to anyone else while making my own app, but being able to answer questions to others with confidence does great things for your own internal confidence, too.

Step Two: Fill The Gap Between Now And Then

Alright, so I’ve got an issue tracker full of tasks and a ship date. I also have a general overview of what Cascable 1.0 should be with the Inception Deck. However, I still haven’t brought all this together to form a set of directions to take me from where I currently am on the project to where I want to be for 1.0.

The problem is, as the lone developer of an app, I’m just in too deep. I can’t see the wood for the trees, and various other clichéd sayings about not having a clear view of the whole situation. I came up with all that stuff above completely on my own. How do I know if it’s any good, or just pure garbage?

What I need is an outsider.


Don’t be fooled, she packs a mean punch.

Meet Alana (that’s “Ah-lay-na”, not “Ah-lar-na”), who has agreed to be Cascable’s Product Owner while I get to 1.0. She’s also my wife, so I suppose she’s also the Product Owner of, well, me. She’s agreed to have meetings with me once every two weeks, splitting my journey into Agile-like sprints. I’ll get to explain which targets I missed and why, which targets I met, and which targets I plan to meet in the next two weeks.

However, we’re getting ahead of ourselves — my current problem is that even though I have a nice Inception Deck I don’t know exactly what 1.0 should be, never mind how to get to it. Alana also had a concern: “How can I be your Product Owner if I don’t know what the product is?”

It turns out that my problem and her concern can be solved in one step. The reason my issue tracker doesn’t connect the current state of the project to 1.0 is that I just picked ideas out of my head when I ran out of tickets. The Inception Deck helps, but it’s still a bit wishy-washy: I need a well thought-out master list of stories to work against. A good way to have Alana know the product? Have her make the list!

Business Time

One Saturday, we sat opposite one another at the dining room table with a pile of Post-It notes, some pens and a camera.

“Alright, “ I said, “You’ve just bought a camera and have realised how crappy the supplied app is. You’re going to hire me to write you an app that enhances your photography experience. I want you to tell me what it should do, and we’ll write each thing down on a note.”

She picked up her camera, prodded at it a bit then said “Erm… I guess it should connect to camera, right?”

Great! Our first story — but this was the very first page, not where the storyline ends. We spent the next hour talking about photography and she made feature suggestions along the way, mainly based on her previous photography experiences. I didn’t make a single contribution to the notes, other than to ask “Why do you want the app to do that?” to make sure that information got written down. Each idea got a note, and after an hour we had a fairly sizeable pile.

After we were done, I quickly added some more notes that contained features I’d already written but she didn’t independently come up with, then started the second half of the exercise:

“Now, I want you to put these all in a line in order of importance to you.”

Again, I didn’t interrupt other than to help when she wasn’t sure. “Should this go higher than Delete photos from the camera or lower?”

This is what we ended up with:

For the first time, I sat back and actually studied the notes. I was floored. In front of me was a complete journey to 1.0 and beyond. Features I hadn’t even thought of were high up the list, and of course they were — they were so stupidly obvious. Conversely, features I’d spent a fair amount of time working on (in particular, a “Night Mode” for the app) were right down towards the bottom, probably past the cutoff point for 1.0, and looking at the list I completely agreed with it being down there. In fact, I couldn’t really argue with the order of the notes at all once I heard the reasoning behind Alana’s chosen position.

I’ve been working on this thing for well over a year and a half now, and two hours with someone with fresh eyes completely changed the project and set it off on the journey to 1.0 with a flying start.

Better still, every single outstanding bug or feature in my issue tracker fit into one of these Post-It stories perfectly. The app doesn’t handle a camera in pairing mode quite right? Well, that goes in the “Connect to camera” story. Oh, crap — that’s the most important story of them all, I should fix that right away!

Step Three: There’s No Step Three!

This is an absolute lie. Step three is the hardest one of all. Now you have a spiffy plan, you have to execute it.

My project isn’t a “side project” any more. Far from it — it has deadlines, a prioritised story list, and a product owner. Between the start of this post and now, it’s transformed into a fully-fledged software project, and I’m letting it down by only working on it in my spare time. Four weeks from today, however, that’s all going to change!


Next time on Secret Diary of a Side Project, we’ll swing back to some coding and talk about what happens when you ignore my advice and decide to refactor a piece of code that really doesn’t need it.


February 8th, 2015

Stripping Unwanted Architectures From Dynamic Libraries In Xcode

Since iOS 8 was announced, developers have been able to take advantage of dynamic libraries for iOS development.

For general development, it’s wonderful to have a single dynamic library for all needed architectures so you can run on all your devices and the iOS Simulator without changing a thing.

In my project and its various extensions, I use Reactive Cocoa and have it in my project as a precompiled dynamic library with i386 and x86_64 slices for the Simulator, and armv7 and arm64 for devices.

However, there’s one drawback to this approach - because they’re linked at runtime, when a dynamic library is compiled separately from the app it ends up in, it’s impossible to tell which architectures will actually be needed. Therefore, Xcode just copies the whole thing into your application bundle at compile time. Other than the wasted disk space, there’s no real drawback to this in theory. In practice, however, iTunes Connect doesn’t like us adding unused binary slices:

So, how do we work around this?

  • We could use static libraries instead. However, with multiple targets and extensions in my project, it seems silly to bloat all my executables with copies of the same libraries.

  • We could compile the library from source each time, generating a new dynamic library with only the needed architectures for each build. A couple of things bother me about this - first, it seems wasteful to recompile all this unchanging code all the time; second, I like to keep my dependencies static, and making new builds each time means I’m not necessarily running stable code any more, particularly if I start mucking around in Xcode betas. What if a compiler change causes odd bugs in the library? It’s a very rare thing to happen, but it does happen, and I don’t know the library’s codebase well enough to debug it.

  • If we don’t have the source to start with, well, we’re kinda out of luck.

  • We could figure out how to deal with this at build-time, then never have to think about it again. This sounds more like it!

Those Who Can, Do. Those Who Can’t, Write Shell Scripts

Today, I whipped up a little build-time script to deal with this so I never have to care about it again.

In my project folder:

$ lipo -info Vendor/RAC/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    i386 x86_64 armv7 arm64

After pushing “build”:

$ lipo -info Cascable.app/Frameworks/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    armv7 arm64

Without further ado, here’s the script. Add a Run Script step to your build steps, put it after your step to embed frameworks, set it to use /bin/sh and enter the following script:

APP_PATH="${TARGET_BUILD_DIR}/${WRAPPER_NAME}"

# This script loops through the frameworks embedded in the application and
# removes unused architectures.
find "$APP_PATH" -name '*.framework' -type d | while read -r FRAMEWORK
do
    FRAMEWORK_EXECUTABLE_NAME=$(defaults read "$FRAMEWORK/Info.plist" CFBundleExecutable)
    FRAMEWORK_EXECUTABLE_PATH="$FRAMEWORK/$FRAMEWORK_EXECUTABLE_NAME"
    echo "Executable is $FRAMEWORK_EXECUTABLE_PATH"

    EXTRACTED_ARCHS=()

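    # $ARCHS is Xcode's standard build setting listing the architectures being
    # built for this particular run (for example "armv7 arm64" for a device build).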
    for ARCH in $ARCHS
    do
        echo "Extracting $ARCH from $FRAMEWORK_EXECUTABLE_NAME"
        lipo -extract "$ARCH" "$FRAMEWORK_EXECUTABLE_PATH" -o "$FRAMEWORK_EXECUTABLE_PATH-$ARCH"
        EXTRACTED_ARCHS+=("$FRAMEWORK_EXECUTABLE_PATH-$ARCH")
    done

    echo "Merging extracted architectures: ${ARCHS}"
    lipo -o "$FRAMEWORK_EXECUTABLE_PATH-merged" -create "${EXTRACTED_ARCHS[@]}"
    rm "${EXTRACTED_ARCHS[@]}"

    echo "Replacing original executable with thinned version"
    rm "$FRAMEWORK_EXECUTABLE_PATH"
    mv "$FRAMEWORK_EXECUTABLE_PATH-merged" "$FRAMEWORK_EXECUTABLE_PATH"

done

The script will look through your built application’s Frameworks folder and make sure only the architectures you’re building for are present in each Framework.

Much better! Now I can throw fat dynamic libraries at my project that contain all the architectures I’ll ever need, and my build process will deal with which architectures are appropriate at any given moment.


February 4th, 2015

Secret Diary of a Side Project: Cold, Hard Cash

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.


Moolah. Cheddar. Bank. Cash. Benjamins. There are so many slang terms for money it’s hard to keep track. It’s not surprising, really — people typically dislike talking about money, and it’s human nature to try and make light of something that’s not even slightly fun. It’s the same reason people make jokes at funerals, I suppose.

Money puts a roof over your head, puts food on the table, and is required to function successfully in modern society (yes, Bitcoin still counts as money - there’s always that one guy, isn’t there?). A fool and his money are easily parted, the saying goes. Nobody wants to be a fool.

However, if you’re serious about your project you’re going to have to spend money on it at some point. If you don’t, you’re going to damage it in ways you might not be able to see until much later.

It’s very difficult — perhaps even nonsensical — to spend nontrivial amounts of money on something that’s solely a side project. However, now that we’re getting serious about this thing, it’s time for that to change — something that can still be really difficult even though we’ve shifted our perception of the project in other areas. It really doesn’t help that money has a tendency to suck the fun out of things, either. It cuts fun stuff out of holidays, limits the size of your TV, and gets in the way of that Ferrari that should be on your drive. Spending money on software I’m perfectly capable of writing on my own? Where’s the fun in that?


This post is really text-heavy, so please enjoy this photo of the Northern Lights my wife took while you take a breather.

Invest In Yourself

Time to be blunt: Without a bit of money put into it, your project will never be as good as it could be. If you refuse to put money into it at all, you should really stop now, or at least reverse course and keep your project as a side project. It’s a harsh thing for me to say, but I really believe it.

Trust yourself.

If you’re there “in theory” but are still struggling with it, try my patented FREE MONEY technique.

NOTE: I should probably point out that I’m not a financial advisor, and you really shouldn’t listen to me. This will become apparent in the next paragraph, but still — I’m an idiot. Go talk to someone smart when deciding what to do with your own money. Like I said, money is serious business.

You know when you find a bit of money in your coat you completely forgot was there? Your immediate reaction is “Sweet, free money!”, despite the fact that it was your money to begin with and you’re exactly back where you were before you lost it. You can just take that mental effect to an extreme — open a new savings account called “App Fund”, choose a sensible amount of money to put aside and transfer it over. For bonus points, make an automated monthly/weekly transfer in there. Your money hasn’t gone anywhere, it’s still safe and sound.

Then, don’t look at it for a while. Go do other things. A month or two later, when you’ve survived without that money completely without incident, come back to it. Sweet, free money!

Now that you’ve followed my terrible advice (or hopefully not) and put some money aside, how can it actually benefit your project?

Hire Someone Better Than You

Universal Rule #1: Time is money [citation needed], particularly if time is constrained. If you can hire someone to do a way better job than you in a fraction of the time, you should seriously consider it.

Me? I’m terrible at designing stuff. I can throw together some lines that look sort of like an arrow, but that’s it.

A few months ago, the core functionality of the app was settling down and it was time to start working on the UI properly, particularly the part of the app that dealt with photos.

My photo view was a simple grid. When you had a camera connected, it showed you the photos on the camera. When you didn’t, it showed only the ones you’d downloaded. Seems simple enough! However, in practice, it was horrible.

On the left, you see some photos you’ve downloaded. Then you switch on your camera and a couple of seconds later the view changes to the one on the right. Where’d my photos go? Well, I can see the speedboat towards the bottom there, but what about the one of the house?

I banged my head against this for a little while before throwing in the towel and taking to Twitter for help:

I got a few replies and picked two of the most interesting-looking designers to take a look at my UI. One of the two impressed me enough that I extended his assignment a bit, and I plan to use him a lot more as this project goes full-time.

Anyway, a stupidly short amount of time later, I had some great mockups that made complete sense, and I felt a bit stupid for not thinking of it before. Still! I hadn’t spent a crazy amount, and I’d come out with a professional designer’s take on my problem; the walls I’d been bumping against crumbled away and development picked up again.

It was money well-spent, in my opinion, particularly taking the “time is money” rule into account.

Buy Tools

Universal Rule #2: You can’t build good things without good tools.

Having a little bit of money set aside lets you invest in those tools that make your life a lot easier but are difficult to justify on projects that don’t have a dedicated budget.

In particular for me, the time saved by a single Sketch feature (the ability to automatically export to @1x, @2x and @3x with a single click) is well worth the $99 it costs. Before I had a budget it was a struggle to justify, since I already get Photoshop as part of my $10/month Adobe Creative Cloud subscription. With a budget? No brainer.

Buy Hardware

Universal Rule #3: Always test with real hardware.

Alright, this one isn’t really that universal. However, you really must work with real hardware when building software. Emulators are crappy.

Depending on your project, you might be able to get away with an iOS device or two and be done with it. It’s really important, in my opinion, to get hold of the slowest device you support and test on that as much as you possibly can. If you run your app day-to-day on the slow thing, you’ll kinda do performance optimisation as you go, which is much nicer than working with the latest and greatest for a year, then thinking “Oh, I should test on that old one” a week before release and finding it runs at 2fps!

For my project, though, I work with cameras. For a long time, I was just using the camera that started this whole thing, my Canon EOS 6D. It works perfectly well, but there are a few problems:

  1. I can’t really say I support “Canon SLRs” if I’ve only tested it with this one model of camera.

  2. The constant testing and debugging cycles have really taken a toll on the battery.

  3. This is my personal camera, and as such I’ve spent a great deal of time setting it up to my liking. However, for testing purposes I have to screw around with it all the time, which gets annoying fast when I just want to go outside and take photos. Even worse, if my tests fill it up with photos of the wall and mess with the settings, I might end up missing a great photo opportunity.

To solve these problems, I bought a different model of camera that’s dedicated to testing. This solves all of my problems — I can screw around with it all I like and not care, the mains adapter I bought with it means I don’t need to worry about the battery, and the fact it’s a different model means I can be more confident about compatibility.

However, you do have to kind of feel sorry for the poor thing. Some cameras get to take photos of beautiful models, some get to see the Northern Lights, others record a couple’s most treasured memories on honeymoon. This one is destined to have a life chained to a desk by a power cord, taking photos of the wall.

Once Cascable ships, I should take it to somewhere beautiful to celebrate.

My Expenses

Adding it all up, I’ve spent well over USD $1,500 on Cascable and I’m still only working on it part-time. Even with this in mind, it’s been absolutely worth it in my eyes — the tools and services I’ve bought have boosted the quality of my project no end.

If you’re interested, here’s a list of my current and near-future expenses.

Item                            Cost (USD)
Test Camera (Canon EOS 70D)     $999
Canon Mains Power Adapter       $119(!)
Design Services                 $430
JIRA License                    $25
Sketch License                  $99
iOS Developer Membership        $99

As I approach launch, the costs will start to pile up:

Item                            Cost (USD)
Various Hosting Costs           ~$40/month
SSL certificate                 $100-$500
Another Test Camera             ~$600
Design Services                 ~$1,000 - $2,000

Not including workspace costs (which I’ll discuss in a future post) or the cost of my own time (time is money!), I’m going to be several thousand dollars in the hole by the time I launch.

Once upon a time, a few mistakes ago, this number would have scared me away big time. But! If I’m not willing to invest in myself, how can I expect customers to invest their money and time into me and my app?


Next time on Secret Diary of a Side Project, we’ll talk about taking your project from whatever state it’s in right now through to launch, with careful planning and a little help from an outsider.


January 25th, 2015

Secret Diary of a Side Project: Coding Practices

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about some of the coding practices I’ve picked up over the years that really make a difference when working on projects that have a limited time budget.


There are tons of coding practices that help us be better, faster, more understandable as coders. However, although this post is pretty long, I only talk about two practices — both of which are focused on keeping projects simple to understand for people new to the project. That’s really important for a side project you intend to see through — because you’re working under severe time constraints, you may well go months between looking at a particular part of the project. You’ll be a newbie to your own code, and future you will love past you a lot more if past you writes a simple, easy-to-understand project.

Keep Dependencies Down, Keep Dependencies Static

Might as well get the most unpopular one out of the way first — I dislike third party dependencies, so I keep them to an absolute minimum. CocoaPods is a third party dependency to manage my third party dependencies, so I don’t use that at all.

My app has four third party dependencies, one of which isn’t included in release builds (CocoaLumberjack).

  • CocoaLumberjack
  • Flurry
  • MRProgress
  • Reactive Cocoa

The list itself isn’t important. What’s important is that each item in it only got there after careful consideration of the benefits and drawbacks.

There’s a huge amount of discussion online about CocoaPods, and I’m going to ignore all of it — CocoaPods doesn’t really add much for the way I approach projects, so I don’t use it.

So, how does a dependency end up on that list?

  1. If I end up in a position where a third party library might seem useful, I figure out if I should just write the functionality myself instead. After all, I don’t truly know how something works unless I wrote it, and if I want 100% confidence in my product, I should write it all (within reason).

  2. If I actually decide I want to use the library, I’ll find the latest stable release of it, add it to my project, and start using it.

  3. I never touch or update that dependency again unless I have a good reason to.

Point 3 in particular makes most of CocoaPods’ usefulness moot. A requirement for me to use a third party piece of code is that it’s mature and stable. If they’re updating the library frequently and I’m required to keep up with those updates to avoid problems, well, that library gets deleted and I find something else.

Using this approach, I can concentrate on making my app better rather than making sure the spiderweb of dependencies I’ve added doesn’t screw things up every time they get updated.

Model Code Goes In A Separate Framework Target

While preparing for this post I had a look back at my previous projects and it turns out I’ve been doing this since I started programming in Objective-C and Cocoa back in 2006, and I really love the approach.

Basically, if it doesn’t involve UI or application state, it goes in a separate framework. My iPod-scanning app Music Rescue contained a complete framework for reading an iPod’s database. My pet record-keeping app Clarus contained one to work with the application’s document format.

Even though my camera app isn’t ready yet, I have a stable, tested, fully-documented framework that cross-compiles for iOS and Mac OS X. That framework takes complete care of camera discovery, connections, messaging queues, and all that jazz.
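To give a flavour of what I mean (and only a flavour: the class and method names below are invented for illustration, not the framework’s actual API), the public face of something like camera discovery might look roughly like this:

// CBLCameraDiscovery.h — a hypothetical public header for such a framework.
// The names are made up for illustration; the point is that everything the
// app needs lives behind a small, documented Objective-C API, while sockets,
// camera protocols and threading stay private to the framework.

#import <Foundation/Foundation.h>

@class CBLCamera, CBLCameraDiscovery;

/** Informs the client about cameras appearing on the network. */
@protocol CBLCameraDiscoveryDelegate <NSObject>

/** Called on the main queue when a supported camera is found. */
-(void)cameraDiscovery:(CBLCameraDiscovery *)discovery didFindCamera:(CBLCamera *)camera;

@end

@interface CBLCameraDiscovery : NSObject

/** The delegate to be informed about discovered cameras. */
@property (nonatomic, weak) id <CBLCameraDiscoveryDelegate> delegate;

/** Starts searching the local network for supported cameras. */
-(void)beginSearching;

/** Stops searching. */
-(void)stopSearching;

@end

The application only ever talks to a surface like this; how the cameras are actually found and spoken to is entirely the framework’s problem.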

It’s true that this actually adds more work for you, at least at the beginning. Isn’t this post supposed to be about making your life easier? Well, the long-term benefits far outweigh the extra work.

It Provides Separation Of Responsibilities

A huge benefit to this is code readability and separation of responsibilities. Suddenly, your application has a whole set of problems it just doesn’t have to care about any more. Sure, you need to care about them, but your application doesn’t. It makes the application lighter, easier to work with and that bit less complicated to understand.

It Encourages You To Future-Proof and Document APIs

This is an interesting one. Now your logic is in a completely separate target, suddenly it’s a product all of its own. It needs documentation. It needs a stable and thought-out API.


This code in the Mac Demo app hasn’t changed since 2013, even though camera discovery has been refactored at least twice in that time.

This pays dividends down the road if pulled off correctly. Designing APIs is hard — I’ve been designing public APIs for Spotify for a number of years now, so I’ve stumbled through all the terrible mistakes already. Some pointers for designing APIs that stand the test of time:

  • No non-standard patterns get exposed publicly. Sure, your task abstraction layer/KVO wrapper/functional programming constructs are amazing now, but in two years? You’ll regret exposing it publicly when you move to the new hotness. Plus, users shouldn’t need to learn your weird thing just to connect to a camera — even if that user is you in six months.

  • Document everything as you go. Header documentation is great in Xcode these days.


“How does this thing behave again?” Opt-Click “Aha!”

  • If you need to do background work, have the library completely deal with it. The client application shouldn’t have to care about it at all. A common pattern is to have public methods dispatch to a queue/thread privately managed by the library, with the aim of making the library somewhat thread-safe. If clients find themselves needing direct access to the private queue/thread, rethink your APIs so they don’t — it’s a pretty bad code smell. Always document what queue/thread callbacks come back on, or take a parameter to let the client tell you.
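To make that last bullet concrete, here’s a minimal sketch of the pattern (again with invented names rather than the real API): the public method does its work on a queue the library owns, and the header documents that the completion handler always comes back on the main queue.

#import <Foundation/Foundation.h>

// A hypothetical framework class, sketched only to show the shape of the pattern.

@interface CBLCamera : NSObject

/** Fetches the list of files on the camera.
    The completion handler is always called on the main queue. */
-(void)fetchFileListWithCompletion:(void (^)(NSArray *files, NSError *error))completion;

@end

@implementation CBLCamera {
    dispatch_queue_t _workQueue; // Private serial queue owned by the framework.
}

-(instancetype)init {
    if ((self = [super init])) {
        _workQueue = dispatch_queue_create("com.example.camera-work", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

-(void)fetchFileListWithCompletion:(void (^)(NSArray *files, NSError *error))completion {
    dispatch_async(_workQueue, ^{
        // ...the slow, blocking camera protocol work happens here...
        NSArray *files = @[]; // Placeholder result for the sake of the sketch.
        NSError *error = nil;

        // Hop back to the main queue so the client never has to think about threading.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion != nil) completion(files, error);
        });
    });
}

@end

The client gets plain blocks and documented behaviour; the queue never leaks out of the framework.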

It Makes Quick Prototyping and Testing Crazy Easy

This is my favourite benefit of the multi-target approach, and where you really start to reel in the time savings. Making the core of the application compile for Mac OS X means I can prototype super easily.

I have a Mac target called Cascable Mac Demo. It’s a wonderful little debugging tool — it supports viewing all of the camera’s properties, taking a photo, browsing the file system and downloading files, and streaming the camera’s viewfinder image. Thanks to having a feature-complete library with a thought-out API, the entire application is less than 250 lines of code.

This little application makes building and testing new functionality a breeze. When launched, it connects to the first camera it finds and sets up just enough state to let me drop in some code to test new functionality as it’s being built.
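For illustration, that startup logic might look something like the sketch below. The framework classes and the connectWithCompletion: method are hypothetical stand-ins rather than the real API, but the shape is the point: a handful of lines gets you a connected camera to poke at.

#import <Cocoa/Cocoa.h>
// CBLCameraDiscovery, CBLCamera and connectWithCompletion: are hypothetical
// names used for illustration, not the actual framework API.

@interface DemoAppDelegate : NSObject <NSApplicationDelegate, CBLCameraDiscoveryDelegate>
@property (nonatomic, strong) CBLCameraDiscovery *discovery;
@property (nonatomic, strong) CBLCamera *camera;
@end

@implementation DemoAppDelegate

-(void)applicationDidFinishLaunching:(NSNotification *)notification {
    // Start looking for cameras as soon as the app launches.
    self.discovery = [CBLCameraDiscovery new];
    self.discovery.delegate = self;
    [self.discovery beginSearching];
}

-(void)cameraDiscovery:(CBLCameraDiscovery *)discovery didFindCamera:(CBLCamera *)camera {
    // Take the first camera we see and stop looking for more.
    [discovery stopSearching];

    [camera connectWithCompletion:^(NSError *error) {
        if (error == nil) {
            // Enough state is now set up to start poking at new functionality.
            self.camera = camera;
        }
    }];
}

@end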

This is a much better approach than dropping random code somewhere in the main iOS app to check that new functionality is coming together properly, and it means my core functionality is mostly working and complete before it ever goes into the main app.

It Gives You Flexibility

What if I want to release a Mac version of my app one day? Well, the core functionality is already compiling, running and tested on Mac OS X. Hell, the Mac Demo app is more full-featured than some proof-of-concept apps I’ve seen!

If you want to be really flexible and are seriously considering multiple platforms, write your core framework in something cross-platform, like C# (or C++ if you hate yourself). The benefits of a constant, mature, tested library across all of your platforms will pay dividends.


Cascable in C#? Why not?


Next time on Secret Diary of a Side Project, we’ll talk about one of the most difficult things in this whole process: cold, hard, cash money.