January 3rd, 2016

Garmin VIRB XE Review Updated

Back in August 2015, I reviewed a new action camera on the market - the Garmin VIRB XE. I really liked it, and sold my GoPro cameras in favour of it. Since then, several software updates have come along, changing the experience quite a lot — particularly if you use the data recording and display features.

As such, I’ve updated my review to reflect what the camera is like in early 2016. Spoiler: It’s better!

 

You can find my full and updated review here.


December 6th, 2015

Sprucing Up Indoor Training with Simulated Power Data

The clocks have gone back and the nights are closing in. Here in Sweden, it’s already dark by 3:30pm!

The dark, more than the cold, severely dampens my enthusiasm for cycling in the evenings after work — the lovely path along the edge of the lake becomes a harrowing edge over a black nothingness.

So, it’s time to bring the evening rides indoors. I’m not a fan of regular exercise bikes – you have to spend silly money to get a decent one, and then you get some weird geometry. I already have a great bike that’s been perfectly set up over a period of time to provide the correct geometry for my body. Why can’t I use that?

Thankfully, there are stationary trainers that let you do just that. I have a Kurt Kinetic Rock and Roll Smart stationary trainer — it has a built-in Bluetooth power meter so I can manage workouts on my phone, and is built to allow side-to-side motion of the bike. Not only does this simulate real riding better, it allows the lateral forces I put through the bike to be absorbed by the spring in the trainer and not my rear wheel’s axle and rear triangle, putting to rest fears of stressing parts of the bike that don’t normally take those sort of forces.

Anyway! I’m all set up — this is gonna be just like riding outside!

…Oh.

Well, that’s boring. Why don’t I record a video of my ride to play back while I’m training? And if I’m doing that… it’d be great if I can overlay some data so I can match my pacing to the ride on the video. I use a Garmin VIRB XE camera, the software for which can import the data from my Garmin GPS to overlay my heart rate, speed, pedalling cadence and more over the video. This sounds perfect!

Unfortunately, this is where we hit a snag. The trainer I have has a “fluid” resistance unit, which ramps up resistance with speed — when I pedal fast in a high gear it’s difficult, and when I pedal slowly it’s easy. This sounds sensible enough until you realise that the hardest parts of my ride are up steep hills on off-road trails — I’m putting a ton of power down, but I’m travelling really quite slowly. This means that overlaying speed data onto my video is useless since the trainer is basically simulating a perfectly level road. What I need to overlay on my video is a readout of the actual power I’m putting out at any given moment.


I’m doing 5km/h here, but outputting nearly 300W. 5km/h on my trainer gives an almost negligible power output.

After a weekend of mucking around with several horrible looking programs, I finally managed to get a simulated-but-accurate-enough power figure into Garmin’s software, allowing me to overlay power output onto my video:

Now when riding indoors I can put my iPad and iPhone on a music stand (make sure you get a sturdy one!) and reproduce my outdoor ride by matching my live power output on the trainer to the one displayed in the video.

I love this method of training. It gives me something to look at while riding, and because it’s realtime from my ride, I get great pacing — it’s on local trails I know and ride frequently, and when I need rest stops, I’m already stopping to rest on the video.

Producing Simulated Power Data

So, how do we get that live power overlay?

The easiest option would be to buy an actual power meter for my bike. Most of them are designed for road bikes, and all of them are expensive — you’re looking at upwards of $1,000, which is a bit spendy for a project like this.

So, with that out, we need to simulate our power data. I use the popular site Strava to track my rides, and they provide a pretty decent-looking “simulated” power graph for each ride:

Annoyingly, though, there’s absolutely no way to get this data out of Strava in any meaningful way, so that’s out. Garmin’s similar service, Garmin Connect, doesn’t produce this data at all, so that’s out too.

Looks like we’re going to have to do this manually!

Ingredients

  • A video recording of a bike ride.
  • Some recorded telemetry data from that same ride, such as from a GPS unit.
  • GoldenCheetah, an open-source data management application.
  • Fitness Converter, a free application by yours truly for converting fitness files between formats.
  • Garmin VIRB Edit, a free video editor that can overlay data onto your video.

Method

First, we’re going to load our recorded telemetry data (heart rate, speed, pedalling cadence, etc) from the GPS into GoldenCheetah, a piece of software for working with this sort of thing. Once imported, clicking the “Ride” tab should show graphs of your data:

Note: On the first launch, GoldenCheetah will ask you to set up a profile. You need to enter an accurate weight for you and your bike to get accurate power data.

Next, choose “Estimate Power Values…” from the Edit menu. Once you complete the process, you’ll see more graphs added to your data, including a “Power” graph. If you have other data to check against, such as Strava’s Simulated Power graph, you can compare the two, and if GoldenCheetah’s figures are significantly off you can choose “Adjust Power Values…” from the Edit menu to shift everything up or down.
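I don’t know exactly what model GoldenCheetah uses under the hood, but estimates like this generally come from basic physics: at a given speed, the power required depends on rolling resistance, aerodynamic drag and, above all, the gradient, which is why your weight matters and why a steep off-road climb at 5km/h costs so much more effort than 5km/h on a flat trainer. Here’s a rough sketch of the idea with assumed ballpark coefficients; it’s illustrative only, not GoldenCheetah’s actual code:

#import <Foundation/Foundation.h>

// Rough sketch of a physics-based power estimate. The coefficients are
// assumed ballpark values for a mountain bike and rider, not anything
// taken from GoldenCheetah.
static double EstimatedPowerWatts(double speedMs, double gradient, double totalMassKg) {
    const double g = 9.81;            // gravity, m/s²
    const double crr = 0.008;         // rolling resistance coefficient (assumed)
    const double cdA = 0.5;           // drag area, m² (assumed)
    const double airDensity = 1.225;  // kg/m³ at sea level

    double rolling = crr * totalMassKg * g * speedMs;
    double climbing = totalMassKg * g * gradient * speedMs;
    double aero = 0.5 * airDensity * cdA * speedMs * speedMs * speedMs;
    return rolling + climbing + aero;
}

int main(void) {
    @autoreleasepool {
        double speedMs = 5.0 / 3.6; // 5km/h
        // A 90kg bike-and-rider grinding up a 12% off-road climb: roughly 160W.
        NSLog(@"Climbing: %.0fW", EstimatedPowerWatts(speedMs, 0.12, 90.0));
        // The same speed on the flat (what the fluid trainer simulates): roughly 11W.
        NSLog(@"Flat: %.0fW", EstimatedPowerWatts(speedMs, 0.0, 90.0));
    }
    return 0;
}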

Finally, choose “Export…” from the Activity menu to export the file as a TCX file.

Unfortunately, we’re not quite there — Garmin’s software can’t import TCX files, so we need to convert our new file to the FIT format. The best pre-existing solution I could find for this was really quite terrible, so I ended up writing my own (as you do): Fitness Converter.

Once the data is in the FIT format, we can import it into VIRB Edit. Since the VIRB XE camera has GPS in it, it has the accuracy to automatically sync the data from my GPS unit (now with added power data!) perfectly. If you’re not in this position, you can manually sync your data file to the video.

…aaand, we’re done. You can now add your graphs and overlays as you wish using VIRB Edit. Since speed is completely irrelevant in this instance, I leave all that out and just have a single giant power bar — it’s much easier to read mid-workout than a constantly changing number.

Happy training!


Next training spend: a bigger screen!


August 16th, 2015

Garmin VIRB XE for Automotive and Track Days: A First Impressions Review

Update January 2016: I’ve updated this review to reflect the camera and its software after a few months and a few software updates. Happily, it’s pretty much all positive. Parts of the review that are now incorrect are still here but are struck through so you can see what’s changed.

Note: For the first part of this review, I’m going to ramble on a bit about my history with this sort of thing and why I’m so hopeful that the VIRB XE isn’t crappy for use on track days. If you don’t care, you can scroll down a bit to get to the real review.

We were totally ahead of the times, man!

I’ve always loved cars and driving. As soon as I had a car more interesting than my Mum’s 1.2L Vauxhall Corsa (SXi!) I started going on track days. As my skills and enjoyment grew I wanted to record videos of my driving to show my friends and catalogue my improvement over time, so I started to record my track driving.

But! Without data, track driving videos are boring. Check out this recent one of mine — even if you’re a car nut, I bet you won’t make it through more than a lap or two before getting bored.

Back in 2007 I was bored of my dataless videos, and as part of my final year at university, I wrote a prototype Mac application to add graphical overlays to my track day videos. It was just a prototype, but it worked great and I was really proud of what I’d made — enough that it still gets a space in my abbreviated life history.

However, while the software was ready, the hardware for gathering the data just wasn’t there. iPhones and iPads were just beginning to arrive, and the other smartphone platforms at the time weren’t quite suitable. In particular, the Windows Mobile devices used at the time didn’t have accurate enough clocks to reliably time the data, warranting a whole section in my dissertation discussing interpolating timestamps.

In 2007, no camera came close to the tiny action cameras of today (particularly in the consumer space), so I ended up using an HDV camcorder strapped into the car.

For recording data from the car I used a reasonably high-end (in the consumer space) OBD to Serial dongle that was advertised as being “high speed”. It read data from the CAN bus of my car at roughly 5Hz, which meant if you wanted to record multiple properties at once, you rapidly lost nuance in your data.

Since there was nothing like the iPad back then, I ended up using a tablet PC designed for outdoor use - it had a digital pen for input, and a special display that was readable outdoors and terrible everywhere else. This thing ran full-blown Windows XP and cost a fortune.

I had well over £3,000/$4,500 worth of big, heavy equipment. Here’s an example of what all that would get you when combined with my prototype software:

 

Perfectly acceptable (despite the hilariously slow data acquisition rate), but I ended up abandoning the project. Strapping all that stuff into your car was just not fun, and the marshals at most track days I went to weren’t desperately happy with the thought of that amount of stuff flying around the car if I crashed. Compare the photos above with my equipment list below and you’ll see just how far we’ve come!

VIRB XE: The Review

This review focuses on the experience the VIRB XE gives when using it to create driving videos, typically on a track day or on a road trip. As well as the camera itself, I’ll be using it with the following equipment:

  • An OBDLink LX — a Bluetooth OBD dongle for interfacing with the car.
  • A Raceseng Tug View — a tow hook with an integrated GoPro mount.
  • An Audio-Technica ATR3350 microphone and Zoom H1 audio recorder.


The camera is attached to the front of my car (along with a lot of bugs!) using the Tug View.

A Note On Audio

Garmin claims their microphone “…records clean and clear audio that cameras in cases just can’t pick up”, which is an implied bash at GoPro, I suppose. While that may be true, the interesting noises from a car come from under the bonnet or out the back, neither of which are interesting places for a camera. Therefore, this review won’t deal with sound quality.

That said, my video explaining how to get good sound quality from your car on a track day does use the VIRB XE for the clips at the end, so if you’re an expert on what wind noise should sound like, go nuts!

 

A Note On Video Quality

I’m not going to directly compare video quality to other cameras either — I don’t have the skill set to do a good job of it. The video quality seems great, though, and the camera does an admirable job in difficult autoexposure situations, like driving through a shady forest on a sunny day.

Pre… Impressions…?

Garmin, I’m going to level with you: paper launches suck. This camera was announced in April and I was super excited about it, thrusting cash at my computer screen with the enthusiasm of a kid in a candy store. And then you said “summer”, and my enthusiasm waned. I went to a track day in August (firmly in “summer”) and the camera still wasn’t available. “Garmin suck!” I found myself saying to my friend, grumpy that I was still waiting for the camera.

That’s a pretty negative feeling to come back from.

First Impressions

This review is going to compare to the GoPro a lot. They’re the de-facto standard in this space, and I’ve been using them for years. They have a huge amount of momentum, but I’ve actually been falling out of love with them for a little while. They’ve always been a bit fiddly, but silly design decisions like that stupid port cover and a flimsy USB connector that’s soldered (poorly, in one of mine) to the mainboard make it feel fragile, which is exactly the opposite of what you want in an outdoor action camera.

Within seconds of pulling the VIRB XE out of its box, you realise it’s different. After a couple of minutes, you get the feeling that it’s been designed with care for its intended environment — dropping off my bike into a muddy puddle.

The whole thing is really well put together. A few particular details stand out for me:


The easy-to-push buttons and big chunky “record” switch are great to use with gloves on.


The screen is lovely and clear compared to that of the GoPro.


A little tray holds inserts that absorb moisture to prevent the camera from fogging. The inserts are reusable and four are included in the box (one of which I promptly lost because they’re small and I’m stupid).


All electronic interfacing is done using this external set of pins. No female ports means no load-bearing flimsy soldering, no holes for water to get in, and no stupid port cover.


Sensibly, they’ve accepted that GoPro currently rule the roost in the market and the camera is directly compatible with the GoPro ecosystem of mounts.

However! It’s not all perfect.

A very minor niggle is that the “Menu” button on mine feels a bit weird. You feel it click when you push it, but nothing happens. You need to push a tiny bit harder to get the button to register.

A much less minor niggle is the cable connecting mechanism. The cable snaps on using a very rugged connector (which is great), but when I pick the camera up it disconnects as if I’d unplugged it. I can reproduce this 100% of the time with my camera and cable, which is quite worrying. Randomly disconnecting is a great way to corrupt the filesystem. Sure, I can work around that by taking the SD card out and using a card reader, but what happens if my dog bumps my desk during a firmware update?

Hopefully, this is just a niggle with my particular camera. I’ll contact Garmin about it and update this review with their reply.

Update January 2016: The weird menu button isn’t unique to my camera. There are theories on the Garmin forums that it’s actually a half-full button like the shutter button on a camera, and there’s nothing yet assigned to a half press. Garmin’s response was that the camera was acting as normal. I haven’t actually used the cable again since this review, and I haven’t pursued it further.

Recording a Car Video

During setup, the camera created a WiFi network and paired with my iPhone perfectly, and the camera allows you to customise its SSID and password on-screen.

Next, I connected it to my OBDLink LX. It took a few clicks of the “Scan” option in the VIRB’s Bluetooth settings before it saw my OBD dongle, but once it found it the two paired instantly. While the camera was adamant it was connected to my car, the VIRB App on my iPhone reported “No connected sensors”. Thankfully the camera was right, and the data from my car was recorded perfectly. Hopefully the glitch in the app will be fixed.

I attached the camera to the front mount on my car, started my audio recorder then used the VIRB app to start the camera from my iPhone. After a little beep of the horn (for syncing my separate audio recording with the video), I set off for a 25-minute drive around a local lake.

Once home, I was able to connect to the camera using my phone and stop recording. Everything appeared to have worked just fine.

Editing a Car Video

This is where I’m ready to be let down. I wrote the app I wanted (well, a prototype of it) eight years ago, and nothing has come close since. Like the bride who’s been planning her wedding since she was a small girl, reality can never quite match up to expectation. Nobody will write the app I want.

Data and Gauges

Expectations lowered, I fire up VIRB Edit for the first time and import the recording straight from the camera.

Holy crap. With zero effort I have a full set of data and a map synced to my drive. This is wonderful!

The quality of the data recorded by the VIRB seems great — the OBD data came out perfectly despite there being a couple of metres and an engine between the camera and the Bluetooth OBD adapter, and the application managed to handle the device losing a GPS fix for a few seconds with grace, resulting in a slightly funny-looking map (bottom left of the map in the screenshot above — the road isn’t that square) but no other problems.

However, the data is a bit too perfect, and the app seems too trusting of it. In particular, G-forces. With the camera directly bolted to my car’s chassis, the camera’s internal accelerometer seems to pick up every tiny little vibration, which VIRB Edit displays without filtering as this example from a perfectly smooth road shows:

Update January 2016: I’m happy to report that this problem has been completely fixed with firmware 3.70, released in early December 2015. I was concerned that the vibrations from being directly bolted to my car with a metal mount would be too much to overcome, but with the firmware update the G-force data from the VIRB is lovely and smooth, and picks up gentle curves and speed changes just fine. You can see a before (left) and after (right) comparison below:

 

It’d be nice if there was an option to have the application perform a low-pass filter on the data. This would reduce the responsiveness of the data slightly, but my 1,200kg car isn’t changing direction fast enough in any axis to make that a huge problem.
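To be clear about the sort of thing I mean: even the simplest single-pole low-pass filter (an exponential moving average) would take the edge off the vibration. This is just a generic sketch of that idea, not how VIRB Edit or the camera firmware actually handles it:

#import <Foundation/Foundation.h>

// A minimal single-pole low-pass filter (exponential moving average).
// Smaller alpha values mean heavier smoothing but a slower response.
@interface LowPassFilter : NSObject
@property (nonatomic) double alpha;  // 0 < alpha <= 1
@property (nonatomic) double value;  // the current filtered value
- (double)filter:(double)sample;
@end

@implementation LowPassFilter

- (double)filter:(double)sample {
    self.value += self.alpha * (sample - self.value);
    return self.value;
}

@end

// Usage: run each raw G-force sample through the filter before displaying it.
// LowPassFilter *filter = [LowPassFilter new];
// filter.alpha = 0.1; // heavy smoothing to kill chassis vibration
// double smoothed = [filter filter:rawSample];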

VIRB Edit comes with a number of templates which work great, and a lot of individual gauges that you can customise the colours of to create your own layouts and styles.

If that’s not enough, you can create your own gauges and edit them, which is a superb feature to have for power users — I plan to make gauges in VIRB Edit to match the ones in my car, and I bet others will do the same.

Video Editing

VIRB Edit is a basic, newbie-friendly video editing application, and the features it does have work well, although I did notice a little audio hiccup during playback when two sequential clips (the camera splits recordings into fifteen-minute chunks) are placed together.

There are a number of features I need to produce my track day videos that VIRB Edit doesn’t have:

  • The ability to import a separate audio track (from my audio recorder) and precisely sync it (and keep it synced) with the audio track of the video.
  • The ability to rotate the video slightly when I mount the camera slightly off-level.

Now, I’m not saying Garmin should implement all these features — that’d be silly given the number of video editors already out there at any price range you can mention. Normally, I’d just import my video into my editor of choice and edit to my heart’s content. However, the addition of data overlays makes that problematic — if I add my data overlays in VIRB Edit then export for further editing, a number of problems occur:

  • An extra layer of encoding has happened, reducing the quality of the video.
  • The gauges are baked into the video, meaning any rotations, colour corrections, etc will be applied to them as well.

I could go the other way — import the raw video into my editor of choice, apply corrections, merge in the better audio, etc, but you still end up with an extra encoding step that reduces quality.

Solving this is actually relatively easy, and my prototype application from years ago had this built-in: several video formats and containers support videos with alpha channels. What I’d love to do is add my data overlays in VIRB Edit then export a lossless video containing only the overlays on a transparent canvas. This way, I could import the original video and the overlays into my editor of choice and keep them in separate tracks, allowing me to apply rotations and colour corrections to the video to my heart’s content. Bonus points for being able to export each overlay separately, allowing the sweet animations seen in Garmin’s own VIRB XE promotional video!

Update January 2016: I’m not sure if I was just being dumb when I wrote this review, but I’ve recently found an option in VIRB Edit’s preferences: Export transparent PNG sequence for overlays. This does exactly what it says on the tin, and after exporting a video it’ll separately export a sequence of transparent PNGs containing only the overlays. Apple’s Motion editing software picked this sequence up directly with no further action needed on my part. The only minor downside to this is that you’ll have one PNG sequence containing every single overlay, which is less useful if you want to animate them independently. This can be worked around, though, by exporting multiple times with one overlay at a time. The minor downside to that approach, though, is that there’s no option to export only the overlay PNG sequence, so you have to re-export the video itself as well. This can become a lengthy process!

Hail To The Power User

One thing I’d like to call out about this product that won’t be talked about in most reviews is Garmin’s attitude towards advanced/power users. Many companies lock away the inner workings of their products in what often turns out to be a futile effort as users tend to reverse-engineer the fun stuff anyway. GoPro’s WiFi protocol has been mostly reverse-engineered, for instance, and there’s a wide range of GoPro “hacks” (which turn out to mostly be undocumented config files) to enable features like long exposures.

Garmin, on the other hand, publishes documentation for controlling their VIRB cameras on their own VIRB Developer site, and VIRB Edit has an “Advanced Editing” button on its already pretty advanced gauge editor which opens up a JSON file in your favourite text editor alongside a PDF documenting the file format.

For most users, this means nothing. However, I love this attitude — I can customise my gauges to my heart’s content and write little apps to control my camera if I want, all using tools provided to me by Garmin.
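As a flavour of what that enables, here’s a little sketch of sending the camera a command over its WiFi connection from your own code. The URL and command name are placeholders from memory, so treat them as assumptions and check the VIRB Developer site for the actual endpoint and command set:

#import <Foundation/Foundation.h>

// Hypothetical sketch of poking a VIRB camera over WiFi. The address and
// command name below are placeholders; consult Garmin's VIRB Developer
// documentation for the real endpoint and command set.
void StartRecording(void) {
    NSURL *url = [NSURL URLWithString:@"http://192.168.0.1/virb"]; // assumed camera address
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"POST";
    request.HTTPBody = [NSJSONSerialization dataWithJSONObject:@{@"command": @"startRecording"}
                                                        options:0
                                                          error:NULL];

    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (error != nil) {
                NSLog(@"Command failed: %@", error);
            } else {
                NSLog(@"Camera replied: %@", [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]);
            }
        }];
    [task resume];
}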

Conclusion

Short Version

I’ve already - and I’m not joking - sold all of my GoPro cameras.

Long Version

I bought this camera within its first week of availability in Sweden, and unfortunately these days that means software niggles are to be expected. However, I’ve owned a number of Garmin devices (and still do) and they’ve a long history of continuing to improve their products over time. My four year old GPS unit still gets regular software updates, for instance. I have a very positive opinion of Garmin as a company — they make solid products and solid software, so I’m hopeful they’ll resolve the bugs I found.

Update January 2016: I’m happy that my faith in Garmin seems to have been well placed - the more problematic software issues have been fixed by updates.

I am rather concerned about the flaky connection between the camera and its USB cable, though. This is certainly a hardware issue — I’ll contact Garmin and see what they say.

Overall, though, I love this camera and have already sold all my GoPros. The combination of its superb build quality and extra data acquisition features is killer for me, and a joy to have after years of lacklustre GoPro updates.

Hardware

Good

  • It feels like it’s built like a tank — I love the record switch in particular.
  • Lots of thought in the design — the moisture tray and port design stand out.
  • Lovely screen compared to the GoPro.
  • Paired with my OBD dongle and phone effortlessly.
  • Directly compatible with the GoPro ecosystem of mounts.

Bad

  • PAPER LAUNCH DAMNIT! Don’t show me a product I want then wait four months to start selling it!
  • Cable doesn’t fit snugly and disconnects when I move the camera. Hopefully this is a one-off thing.
  • One of the buttons feels weird. Again, hopefully a one-off niggle. May actually be as-designed. Garmin considers it ‘normal’.
  • Proprietary cable isn’t super great when you need an emergency charge in a world of micro USB. I see why they did it and, like Apple’s Lightning, the pros outweigh the cons most of the time.
  • Only one sticker in the box. I’m prepared to go full fanboy with this thing, and I only have one sticker?!

Software

Good

  • Great Mac citizen — you’ve no idea how many companies ship crappy “cross-platform” desktop software.
  • Gauges functionality covers all my uses, from great looking templates through to complete and total customisability.

Bad (as of August 2015)

  • Accelerometer data needs a low-pass filter — it’s unusably noisy when the camera is bolted to my car’s chassis. Fixed with firmware 3.70.
  • Audio glitch when transitioning between clips that’ve been cut up by the camera.

Missing Features

  • Ability to export a translucent video containing only the gauges so I can edit the source video in my preferred editor and keep the data overlays clean. Feature exists, but is slightly hidden. My fault!

June 21st, 2015

Secret Diary of a Side Project: In Reality, I've Only Just Started

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.


On March 27th 2013, I started an Xcode project called EOSTalk to start playing around with communicating with my new camera (a Canon EOS 6D) over its WiFi connection.

Over two years and 670 commits later, on June 5th 2015 (exactly a month late), I uploaded Cascable 1.0 to the App Store. Ten agonising days later, it went “In Review”, and seventeen hours after that, “Pending Developer Release”.

Late in the evening the next day, my wife, our dog, a few Twitter friends (thanks to Periscope) and I sat together by my desk and clicked the Release This Version button.

 

I absolutely meant to blog more in the three months since my last Secret Diary post, and I’m sorry if you’ve been looking forward to those posts. An interesting thing happened — I thought I’d have way more time for stuff like blogging after leaving my job and doing this full-time, but I’ve ended up with way less. A strict deadline and a long issues list in JIRA made this a full-time 9am-6pm job. So much for slacking off and playing videogames!

Fortunately, though, I still have a few things I want to write about and now I can slow down a bit, I should start writing here on a more frequent basis again.

Statistics

Some stats for Cascable 1.0 for the curious:

Objective-C Implementation: 124 files, 23,000 lines of code
C/Objective-C Header: 133 files, 2,400 lines of declaration
Swift: None
Commits: 670

Now, lines of code is a pretty terrible metric for comparing projects, but here’s the stats for the Mac version of Music Rescue, the last app of my own creation that brought in the Benjamins:

Objective-C Implementation: 154 files, 24,000 lines of code
C/Objective-C Header: 169 files, 4,100 lines of declaration
Swift: This was 2008 — I barely had Objective-C 2.0, let alone Swift!

As you can see, the projects are actually of a similar size. It’s a completely meaningless comparison, but it’s interesting to me nonetheless. Back in 2008 I considered Music Rescue a pretty massive project, something I don’t think about Cascable. I guess my experience with the Spotify codebase put things in perspective.

You can check Cascable out here. You should totally buy a copy!

Celebrating

At NSConference 7 I gave a short talk which was basically Secret Diary: On Stage, in which I discussed working on this project.

 

In that talk, I spoke about a bottle of whiskey I have on my desk. It’s a bottle of Johnnie Walker Blue Label, and at £175 it’s by far the most expensive bottle of whiskey I’ve bought. When I bought it, I vowed it’d only be opened when a real human being that wasn’t my friend (sorry Tim!) exchanged money for my app.

Releasing an app is reward in itself, but there’s nothing tangible about it. Having that physical milestone there to urge me on really was helpful when I was on hour four of debugging a really dumb crash, for instance.

This weekend, that bottle was opened. It tasted like glory.


May 1st, 2015

Build-Time CFBundleVersion Values in WatchKit Apps

When building a WatchKit app, you’ll likely encounter this error at some point:

error: The value of CFBundleVersion in your WatchKit app’s Info.plist (1) does not match the value in your companion app’s Info.plist (2). These values are required to match.

Easy, right? We just make sure the values match. But… what if we’re using dynamically generated bundle version numbers derived from, say, the number of commits in your git repository? Well, we just go to the WatchKit app’s target in Xcode, click the “Build Phases” tab and… oh. There isn’t one.

So, if we’re required to have our WatchKit app mirror the CFBundleVersion of our source app and we’re generating that CFBundleVersion at build time, what do we do? First, we wonder why this mirroring isn’t automatic. Second, we try to modify the WatchKit app’s Info.plist file from another target before realising that it screws with its code signature. Third, we come up with this horrible workaround!

The Horrible Workaround

The workaround is to generate a header containing definitions for your version numbers, then use Info.plist preprocessing to get them into your WatchKit app’s Info.plist file.

This little tutorial assumes you already have an Xcode project with a set up and working WatchKit app.

Step 1

Make a new build target, selecting the “Aggregate” target type under “Other”.

Step 2

In that new target, create a shell script phase to generate a header file in a sensible place that contains C-style #define statements to define the version(s) as you see fit.

My example here generates two version numbers (a build number based on the number of commits in your git repo, and a “verbose” version that gives a longer description) then places the header into the build directory.

GIT_RELEASE_VERSION=$(git describe --tags --always --dirty --long)
COMMITS=$(git rev-list HEAD | wc -l)
COMMITS=$(($COMMITS))

mkdir -p "$BUILT_PRODUCTS_DIR/include"

echo "#define CBL_VERBOSE_VERSION ${GIT_RELEASE_VERSION#*v}" > "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"
echo "#define CBL_BUNDLE_VERSION ${COMMITS}" >> "$BUILT_PRODUCTS_DIR/include/CBLVersions.h"

echo "Written to $BUILT_PRODUCTS_DIR/include/CBLVersions.h"

The file output by this script looks like this:

#define CBL_VERBOSE_VERSION a6f5bd0-dirty
#define CBL_BUNDLE_VERSION 1

Step 3

Make your other targets depend on your new aggregate target by adding it to the “Target Dependencies” item in the target’s “Build Phases” tab. You can add it to all the targets that you’ll use the version numbers in, but you’ll certainly need to add it to your WatchKit Extension target.

Step 4

Xcode tries to be smart and will build your target’s dependencies in parallel by default. However, this means that your WatchKit app will be built at the same time as the header is being generated by the aggregate target, which will often result in build failures due to the header not being available in time.

To fix this, edit your target’s scheme and uncheck the “Parallelize Build” box in the “Build” section. This will force Xcode to wait until the header file has been generated before moving on.

Step 5

Edit the build settings in your targets as follows:

  • Preprocess Info.plist File should be set to Yes.
  • Info.plist Other Preprocessor Flags should be set to -traditional.
  • Info.plist Preprocessor Prefix File should be set to wherever your generated header file has been placed. In my case, it’s ${CONFIGURATION_BUILD_DIR}/include/CBLVersions.h.

Step 6

Finally, change the values in your Info.plist files to match the keys in your generated header file. In my case, I set CFBundleVersion (also known as Bundle Version or Build depending on where you’re looking in Xcode) to CBL_BUNDLE_VERSION.
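A quick way to sanity-check that the preprocessing actually took effect (rather than you shipping the literal string CBL_BUNDLE_VERSION) is to log the value at runtime from one of your targets:

// Log the substituted value at runtime; you should see a number here,
// not the literal string "CBL_BUNDLE_VERSION".
NSString *bundleVersion = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"CFBundleVersion"];
NSLog(@"CFBundleVersion is %@", bundleVersion);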

Step 7

Go to the Apple Bug Reporter and ask (nicely) that they give us build phases back for WatchKit apps. You can dupe mine (Radar #20782873) if you like.

Step 8


Success!

Conclusion

This is horrible. We need to disable parallel builds and generate intermediate headers and all sorts of nastiness. Hopefully we’ll get build phases back for WatchKit apps soon!

You can download a project that implements this tutorial here.


March 24th, 2015

NSConference 7

“I checked the version of your presentation with the video in it, and it works fine. Shall we just use that one, then?”

Panic set in, again. Scotty was already onstage and in the process of introducing me, so I had to think fast. I’d been accepted to give a “blitz talk” — that is, a short, 10-minute long presentation — at NSConference this year, and I’d put a little video clip that at best could be described as “stupid” into my slides. I thought it was funny, but I was so worried that it’d be met with a stony silence by the hundreds of attendees that I’d also provided a copy without the video.

At least it’ll be an interesting story to tell, I thought to myself, and confirmed that I’d use the version with the video before stepping out into the blinding lights of the stage.

Here we go!


NSConference has always been about community. I’ve been fortunate enough to attend a number of them over the years, following it around the UK from Hatfield to Reading to Leicester. I’ve met a number of friends there, and it’s always inspiring. The mix of sessions normally has a fairly even distribution of technical and social topics, and this year was no exception — some fantastic speakers gave some wonderfully inspiring talks that really touched close to home, and others gave some fascinating technical talks on the old and the new.

Rather than list them now, I’m going to do a followup post when the NSConference videos are released that’ll link to my favourite talks and discuss why I found them so great.

However, the talks are only half of it. I’m pretty shy around new people, and my typical conference strategy is to sit with people I already know during the day, then hide in a corner or my hotel room during the evenings. This time, however, I was determined to at least try to make friends, and with little effort I found myself speaking to so many new people I can barely remember them all. Everyone was so friendly and so supportive, and I had a huge number of really interesting conversations with people from all over the world.


A joke is a great way to break the ice, someone once said. I start with “The lunches aren’t so light if you go back for thirds, are they?”1, referencing the fact we were given a light lunch that day in preparation for the banquet later. Sensible chuckle from the audience. Alright, maybe my video won’t flop after all!

“Hello everyone,” I continued, “My name is Daniel and for the past four years I’ve been working as a Mac and iOS developer at Spotify. And four days ago — last Thursday — I left to become an Indie developer. Today, I’m—”

I was interrupted by a huge round of applause that went on long enough to mask my stunned silence. This is what NSConference is about: hundreds of friends and strangers coming together to support one another in whatever we’re doing. One of the larger challenges in what I’m doing is the solitude — I left a job where I’m interacting with a lot of people every day to one where I sit alone in a corner of my house. As I stand on the stage, the applause lifts me up and drives home that while I may physically be on my own, I have a huge community of peers that are right behind me and are willing me to succeed.

As the applause dies down, I do a “Thank you, goodnight!” joke to move around the stage and regain my composure. Thirty seconds later, we arrive at my stupid video.

My thumb hovers over the button to advance the slide and start the video. If I double-click it, it’ll skip the video! A moment’s hesitation…

Click.

That two second video clip got what I think was one of the biggest laughs of the conference, and I was so relieved I even started laughing at it myself.

Right! Time to get my shit together — I’m supposed to be sharing information!


At the end of the conference, heartfelt things were said onstage as the sun set on the final NSConference — there wasn’t a dry eye in the house. During this, staff handed a glass of whiskey to every single person in the audience. At the very end, Scotty held a toast, then left the stage as we clinked glasses.

The last NSConference came to a close with the sound of hundreds of people clinking glasses in toast to seven years of incredible experiences. The sound resonated around the hall for a number of minutes before eventually subsiding, and is something I’ll never forget.

As a tribute to the conference and the work the organisers put in, the community is banding together to raise money for Scotty’s favourite cause, Water.org, which has the goal of providing clean water to everyone who needs it. You can donate at the NSConference 7 fundraiser page.

Clink.

  1. It should be noted that my talk wasn’t really scripted so I’m recounting what I said from memory. When the video is released it’ll likely prove that I’m misremembering my exact wording. The gist will be the same, though.


March 10th, 2015

Secret Diary of a Side Project: The Refactor From Hell


Why I need a designer: Exhibit A.

THIS BUTTON.

This innocuous little button cost me a week. Let that settle in. A week.

It’s a simple enough premise — when the user gets presented a dialog like this, you should give them a way out. Presenting a button-less dialog is all kinds of scary — what if the camera crashes and doesn’t give the expected response, or any response at all? Sure, I can guard against that, but still.

So, it’s settled! I’ll implement a Cancel button so the user can back out of pairing with their camera. What a completely logical and easy thing to do.

PROGRAMMING!

Here’s the problem I faced:

Typically, when you connect to a camera you send it a message to initialise a session, then wait for a success response. This normally takes a small number of milliseconds, but when the camera is in pairing mode it won’t respond at all until the user has gone through a few steps on the camera’s screen.

All we need to do is sever the connection to the camera while we’re waiting, right? Easy enough. However, the architecture of my application has it working with the camera in a synchronous manner, writing a message then blocking until a response is received. All this is happening on a background thread so it doesn’t interfere with the UI, and since the camera has a strict request-response pattern, it works well enough. However, in this case, I can’t sever the connection on the camera’s thread because it’s completely blocked waiting for a response. If I try to do this from a separate thread, I end up with all sorts of nasty state — dangling sockets and leaked objects.

The solution to this sounds simple — instead of doing blocking reads, I should schedule my sockets in a runloop and use event-based processing to react when responses are received. That way, nothing will ever be blocked and I can sever the connection cleanly at any point without leaving dangling sockets around.
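In concrete terms, that means swapping the blocking reads for an NSStream scheduled in a runloop, with a delegate reacting to events as they arrive. A simplified sketch of the shape of it (not my actual class) might look like this:

#import <Foundation/Foundation.h>

// Simplified sketch of event-driven socket handling: schedule the stream in a
// runloop and react to delegate callbacks instead of blocking on reads.
@interface CameraConnection : NSObject <NSStreamDelegate>
@property (nonatomic, strong) NSInputStream *inputStream;
@end

@implementation CameraConnection

- (void)open {
    self.inputStream.delegate = self;
    [self.inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.inputStream open];
}

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)event {
    if (event == NSStreamEventHasBytesAvailable) {
        uint8_t buffer[1024];
        NSInteger length = [self.inputStream read:buffer maxLength:sizeof(buffer)];
        if (length > 0) {
            // Append the bytes to a response buffer and parse complete messages here.
        }
    }
}

- (void)cancel {
    // Because nothing is ever blocked, tearing down is clean: close and unschedule.
    [self.inputStream close];
    [self.inputStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

@end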

Easy!


Seven hours later I’m sitting at my desk with my head in my hands, wishing I’d never bothered. It’s 11pm, and later my wife tells me she’d approached me to come play video games but decided I looked so grumpy I’d be best left alone. I have no idea why it’s not working. I’m sending the exact same bytes as I was before, and getting the same responses. It actually works fine until traffic picks up — as soon as you start to send a lot of messages, random ones never get a response.

Well after midnight, I throw in the towel. I’d been working at this one “little” problem nonstop for eight hours, my code was a huge mess and I almost threw away the lot.

“I’m such an idiot,” I told my wife as I got into bed, “I even wrote about this on my blog, using the exact code I’m working on as an example”.

Yup, this is that old but reliable code I wrote about a couple of months ago. The class I said I’d love to refactor but shouldn’t because it worked fine.

One way of proving a hypothesis, I suppose.

As I was drifting off to sleep, I had an idea. I decided it could wait until the morning.


I slumped down into my chair the next morning and remembered my idea. Twenty minutes later, it was working like a charm1.

Sigh.

So, now it’s working and a darn sight better looking than my old code. However, the two years’ worth of confidence and proven reliability that I had with the old code has vanished — it seems to work, yes, but how can I be sure? Maybe there’s bugs in there that haven’t shown themselves yet.

If You Don’t Have Experience, You Need Data

I’ve been writing unit tests here and there for parts of my app where it makes sense.

“Business logic” code for the app is simple enough to test — instantiate instances of the relevant classes and go to town:

CBLShutterSpeed *speed = [[CBLShutterSpeed alloc] initWithStopsFromASecond:0.0];
XCTAssert(speed.upperFractionalValue == 1, @"Failed!");
XCTAssert(speed.lowerFractionalValue == 1, @"Failed!");

CBLShutterSpeed *newSpeed = [speed shutterSpeedByAddingStops:-1];
XCTAssert(newSpeed.upperFractionalValue == 1, @"Failed!");
XCTAssert(newSpeed.lowerFractionalValue == 2, @"Failed!");

Parsing data given back to us by the camera into objects is a little bit more involved, but not much. To achieve this, I save the data packets to disk, embed them in the test bundle and load them at test time. Since we’re testing the parsing code and not that the camera gives back correct information, I think this is an acceptable approach.

-(void)test70DLiveViewAFRectParsing {
    NSData *rectData = [NSData dataWithContentsOfFile:[self pathForTestResource:@"70D-LiveViewAFRects-1.1.1.dat"]];
    XCTAssertNotNil(rectData, @"afRect data is nil - possible integrity problem with test bundle");

    NSArray *afAreas = [DKEOSCameraLiveViewAFArea liveViewAFAreasWithPayload:rectData];
    XCTAssertNotNil(afAreas, @"afRects parsing failed");

    XCTAssertEqual(31, afAreas.count, @"Should have 31 AF areas, got %@", @(afAreas.count));

    for (DKEOSCameraLiveViewAFArea *area in afAreas) {
        XCTAssertTrue(area.active, @"Area should be active");
        XCTAssertFalse(area.focused, @"Area should not be focused");
    }
}

Alright, so, how do we go about testing my newly refactored code? It poses a little bit of a unique problem, in that my work with this camera is entirely based on clean-room reverse engineering — I don’t have access to any source code or documentation on how this thing is supposed to work. This means that I can’t compile the camera’s code for another platform (say, Mac OS) and host it locally. Additionally, the thing I’m testing isn’t “state” per se — I want to test that the transport itself is stable and reliable, that my messages get to the camera and its responses get back to me.

This leads to a single conclusion: To test my new code, I need to involve a physical, real-life camera.

Oh, boy.


Unit testing best practices dictate that:

  • State isn’t transferred between individual tests.
  • Tests can execute in any order.
  • Each test should only test one thing.

The tests I ended up writing fail all of these practices. Really, they should all be squished into one test, but a single test that’s 350 lines long is a bit ungainly. So, we abuse XCTest to execute the tests in order.

First, we test that we can discover a camera on the network:

-(void)test_001_cameraDiscovery {
    XCTestExpectation *foundCamera = [self expectationWithDescription:@"found camera"];

    void (^observer)(NSArray *) = ^(NSArray *cameras) {
        XCTAssertTrue(cameras.count > 0);
        _camera = cameras.firstObject;
        [foundCamera fulfill];
    };

    [[DKEOSCameraDiscovery sharedInstance] addDevicesChangedObserver:observer];

    [self waitForExpectationsWithTimeout:30.0 handler:^(NSError *error) {
        [[DKEOSCameraDiscovery sharedInstance] removeDevicesChangedObserver:observer];
    }];
}

…then, we make sure we can connect to the found camera:

-(void)test_002_cameraConnect {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTestExpectation *connectedToCamera = [self expectationWithDescription:@"connected to camera"];

    [self.camera connectToDevice:^(NSError *error) {
        XCTAssertNil(error, @"Error when connecting to camera: %@", error);
        [connectedToCamera fulfill];
    } userInterventionCallback:^(BOOL shouldDisplayUserInterventionDialog, dispatch_block_t cancelConnectionBlock) {
        XCTAssertTrue(false, @"Can't test a camera in pairing mode");
    }];

    [self waitForExpectationsWithTimeout:30.0 handler:nil];
}

(I’m a particular fan of that XCTAssertTrue(false, … line in there.)

Next, because we’re talking to a real-life camera, we need to make sure its physical properties (i.e., ones we can’t change in software) are correct for testing:

-(void)test_003_cameraState {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTAssertTrue(self.camera.connected, @"Camera should be connected");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeAutoExposureMode] intValue], EOSAEModeManual,
                   @"Camera should be in manual mode for testing.");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeLensStatus] intValue], EOSLensStatusLensAvailable,
                   @"Camera should have an attached lens for testing");

    DKEOSFileStorage *storage = self.camera.storageDevices.firstObject;
    XCTAssertTrue(storage.capacity > 0, @"Camera should have an SD card inserted for testing.");
    XCTAssertTrue(storage.availableSpace > 100 * 1024 * 1024, @"Camera storage should have at least 100Mb available for testing.");
}

Once the camera is connected and verified to be in an agreeable state, we can start testing.

  • In order to test against the heavy traffic that caused the dropouts that drove me to insanity that night, I run through every single valid value for all of the exposure settings (ISO, aperture, shutter speed) as fast as I possibly can.

  • To test event processing works correctly, I test that streaming images from the camera’s viewfinder works.

  • To test filesystem access, I iterate through the camera’s filesystem.

  • To test commands, I take a photo.

  • To test that large transfers work, I download the photo the previous test took - about 25Mb on this particular camera.

  • And finally, I test that disconnecting from the camera works cleanly.

As you can see, this is a pretty comprehensive set of tests — each one is meticulous about ensuring the responses are correct, that the sizes of the data packets received match the sizes reported by the camera, etc — they’re essentially an automated smoke test.

The next challenge is to get these to run without human intervention. I can’t just leave the camera on all the time — if it doesn’t receive a network connection within a minute or two of powering on, it’ll error out and you need to restart the WiFi stack to connect again — something not possible without human intervention. Perhaps a software-controlled power switch would allow the tests to power on and off the camera at will. However, that’s a challenge for another day.

I TOLD YOU SO, DAMNIT

So. In an earlier post I talked about being restrained when you think about refactoring code, and my ordeal here is exactly why. At the beginning it looked simple enough to do, but I ended up losing way too much time and way too much sleep over it, and when it finally appeared to work I had no data on whether it was any good or not. If I’d gone through all of that with no good reason it would’ve been a complete waste of time and energy.

But! Thanks to all this work, you can now cancel out of camera pairing from your iOS device! It’s a disproportionate amount of work for a single button, but that’s the way software development goes sometimes — no matter how obvious the next task might look, tomorrow’s just a mystery, and that’s okay. It’s what makes it fun!

Plus, I now have a decent set of smoke tests for communicating with a real-life camera, which is something I’ve been wanting for a long time — a nice little silver lining!

Epilogue

After implementing all this, I decided to have a look at how the camera’s official software approached this problem, UI-wise.

It looks like a floating panel, but it behaves like a modal dialog. There’s no way to cancel from the application at all and if you force quit it, the software ends up in a state where it thinks it isn’t paired and the camera thinks it is paired, and the two will flat-out not talk to one another.

The mobile app can’t possibly be this bad, I thought, and went to experiment. There’s no screenshot here because there is no UI in the iOS app to help with pairing at all — it just says “Connecting…” like normal and you need to figure out that you need to look at the camera on your own.

It’s like they don’t even care.


Next time on Secret Diary of a Side Project, we’ll talk about how to make the transition to working full-time on your side project at home in a healthy way, both mentally and physically.

  1. The problem, if you’re interested, is that the camera throws away any messages received while it’s processing a prior message. This was accidentally worked around in my old code by blocking while waiting for a response. The solution was to maintain a message queue and disallow a message to be sent until a response to the previous one has been received.
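For the extra curious, the shape of that queueing fix looks something like this (hypothetical names, heavily simplified from what’s actually in the app):

// Heavily simplified sketch of the fix: queue outgoing messages and only send
// the next one once the previous one has been answered. Names are hypothetical.
@interface MessageQueue : NSObject
@property (nonatomic, strong) NSMutableArray *pendingMessages;
@property (nonatomic) BOOL awaitingResponse;
@end

@implementation MessageQueue

- (instancetype)init {
    if ((self = [super init])) {
        _pendingMessages = [NSMutableArray array];
    }
    return self;
}

- (void)enqueueMessage:(NSData *)message {
    [self.pendingMessages addObject:message];
    [self sendNextMessageIfIdle];
}

- (void)sendNextMessageIfIdle {
    if (self.awaitingResponse || self.pendingMessages.count == 0) {
        return; // The camera is still chewing on the previous message.
    }
    self.awaitingResponse = YES;
    NSData *message = self.pendingMessages.firstObject;
    [self.pendingMessages removeObjectAtIndex:0];
    [self writeToStream:message]; // Hand off to the stream-handling code.
}

- (void)responseReceived:(NSData *)response {
    self.awaitingResponse = NO;
    [self sendNextMessageIfIdle]; // Now it's safe to send the next message.
}

- (void)writeToStream:(NSData *)message {
    // Write the bytes to the camera's output stream here.
}

@end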


February 25th, 2015

Rebrand

Welcome to my new blog!

It’s like my old blog, but with a much lighter appearance that hopefully provides a nicer reading environment. It should also be faster, and much better on mobile. As well as nearly all of my old posts, I’ve added a spiffy new More About Me page with a succinct version of my life story, if you’re interested. I’ve also spruced up the My Apps and Archive pages.

I’ve tried my very best to make sure all the links from my old blog work with this new one, but if you spot anything amiss I’d appreciate you getting in touch with me on Twitter or emailing blog at this domain and letting me know.

I actually ended up going through an interesting journey while putting this together. To make sure that every post was formatted properly in the new engine, I read through every single one of my posts all the way back to 2004 — and let me tell you, ten years ago I was an idiot. I seriously considered removing all the posts I found embarrassing, but in the end I decided that the journey is just as important as the destination, so they stayed. The only posts I removed were ones that were nothing but links to now-defunct websites.

Technical Details

My previous blog was generated by Octopress, which is a blogging product built on top of Jekyll. However, Octopress’ main selling point for newbies to this whole thing (i.e., me a few years ago) is also its biggest drawback — it’s a complete blogging platform out-of-the-box. This makes diving in and customising it extremely daunting, rather like being presented with a car and a spanner and being told to replace the clutch plate. I did manage to customise a couple of little things on my old site, but not much.

So, a couple of weeks ago I sat here, new theme in hand, ready to try to put it into Octopress. It was soon apparent that I’d basically have to rip the entire thing apart to fully understand what was going on, and if I was going to do that, why not look at alternatives?

I’d recently tried out another static site compiler called nanoc for another project of mine, and really liked it. Where Octopress provides a fully featured blog out-of-the-box, nanoc provides nothing. The default site is literally a white “Hello World” page with no CSS at all. While this is daunting at first, it’s actually quite liberating — it took me about a week to put this whole thing together from scratch, and I now know every intimate detail about it which makes me really comfortable customising it in any way I need.

How The Site Is Put Together

  • There are three “things” in this entire site:

    1. Posts. These are markdown files.

    2. Pages. These are HTML fragments.

    3. Special items like the RSS feed.

  • Posts are put through a markdown parser (kramdown) then wrapped with the site’s template.

  • Pages are rendered pretty much as-is with nothing special going on other than being wrapped in the template. These include the About, Apps, and Archive pages, as well as the site’s home page.

  • When the template is rendered, pages containing the in_menu tag are placed in the site menu. This allows me to have “hidden” pages (like the 404 page) without any extra work.

  • Binary files (images and the like) live in a submodule to the blog’s source repo. Yes, git isn’t great at binaries (and there’s over 300Mb of them for this site), but it works alright for my needs. These files get copied to the output directory with no processing at all.

I’m really pleased with the results of my work, and it gives me greater control over my presence on the web. Over time, I hope to add more features to the site as I work on my web skills.


February 13th, 2015

Secret Diary of a Side Project: Getting To 1.0

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about something that strikes fear into the heart of any programmer: planning. You won’t get to 1.0 without it!


If you’re anything like me, it’s likely that you have some form of issue tracker for your side project, detailing various bugs to be fixed and features to be added. In my instance, that ended up being a sort of rolling affair — I’d fix a bunch of things, see that my issue list was diminishing, then spend a while with the app prodding around until I found more things to add to the tracker. This was a perfectly acceptable approach in the beginning.

However, shortly after I committed to do this full-time, I realised I had no longer-term plan. So, I sat down and decided that I’d try to release 1.0 relatively soon after going full-time, allowing plenty of time to gain feedback from real photographers. You see, I have tons of feature ideas but until photographers tell me what they think, I don’t really have any data to tell me if these ideas are any good. Releasing a 1.0 early allows the app to be shaped by its users, rather than my idea of what I think users want.

This is the result, based on nothing more than a loosey-goosey feeling of the state of the project so far:

  • Start collecting beta invites: 2015-03-10
  • First beta release: 2015-03-24 → 28
  • Post-beta questionnaire: 2015-04-28
  • 1.0 App Store submit: 2015-05-05

Of course, I’ll be amazed if those deadlines stick. Still! It’s great to have something to aim for. I felt much better about myself.

…for a while.

A few days later I looked at those dates and started to feel a bit of dread. That March 10th date is when I really commit to releasing something – it’s when my marketing starts! I had no idea if I’d be able to do it or not. Eventually I realised the problem — the tasks in my issue tracker didn’t connect my project from where it is now to that 1.0 on May 5th.

It’s time to do some serious planning!

Shhh… Don’t Say “Agile”

I have a love-hate relationship with Agile. My first exposure to it was when I started at Spotify in early 2011. The company was very small at the time, and we were using… scrum, I think? I forget. Anyway, as the company grew the thing we were using turned out not to work so well. So, we tried a new thing. Then another new thing. Then the first new thing again but with a slight modification. Eventually, I flat-out stopped caring. “Just tell me how you want me to stick the notes on the wall, and I’ll be fine”, I’d say.

Fast-forward a few years, and a fellow named Jonathan joined the company. He’d written a book on Agile and handed out some copies. I took one with moderate-at-best enthusiasm, which then sat on my desk gathering dust. A few weeks later, he did a talk on a thing he called the “Inception Deck”, a method of planning out your product at its inception stages.

“This is perfect for Cascable!” I thought, and started furiously scribbling notes. After his talk, I told him I thought it was great. “Oh, really? I’m happy you think so — it’s all from my book though.”

At that point, I returned my copy of his book and bought an eBook of it instead, partly because I feel uncomfortable furthering my own app on something my employer paid for, but mainly because I like supporting good work.

I feel really uncomfortable plugging things on this blog — it’s not what it’s for. However, Jonathan’s book has immensely helped me as an independent developer trying to get an app out into the world, and a good deal of this post is inspired by things I learned from it. It’s called The Agile Samurai: How Agile Masters Deliver Great Software, and you can find it here at the Pragmatic Bookshelf.

Step One: Figure Out What You Want To Sell

If you were planning your app from the beginning, you’d start by planning what you want your 1.0 to actually be. A side project is completely the opposite of that — you just create a new project and go, plucking ideas out of your head and going with them.

However, that isn’t sustainable if you want to ship a quality product, no matter how much you claim to “live in the code”. At some point you’re going to have to stop and figure this stuff out, which can be pretty daunting if you’re just chugging along in your code editor.

The “Inception Deck” I spoke about earlier really helped me with this. I won’t go into it in detail — it’s in the book I mentioned above as well as on the author’s blog – but it’s basically a set of small tasks you can do to really help kick a project off in the right direction.

Now, I’m not kicking off a project at all, and some of the items in the Inception Deck are geared a bit towards teams working on one project rather than the lone developer, but still — if some of the tasks help bring clarity to my project, I’m all for it!

Alright, it’s time to jump out of development and pretend I’m doing this properly by doing the planning at the beginning. I cherry-picked the most relevant tasks from the Inception Deck, and here’s what I came up with, more or less copied and pasted from my Evernote document:

The Inception Deck for Cascable 1.0

Why Are We Here?

This task helps establish why this project exists to start with.

The applications that come with WiFi-enabled cameras tend to be pretty terrible. We can do better, and make a WiFi connection an indispensable tool on a camera rather than a toy.


Elevator Pitch

This is a fairly standard thing in the software world these days. Describe the product in 30 seconds.

For photographers who need intelligent access to their camera and photos in the field, Cascable is an iOS app that connects to the camera over WiFi and opens up a world of possibilities. Unlike current apps, Cascable will develop and evolve to become an easy-to-use and indispensable tool for amateur and professional photographers alike.


The Not List

This one is new to me and was incredibly helpful. Defining what isn’t in scope for 1.0 can be as useful as defining what is.

In Scope for 1.0 — Things that will definitely make it.

  • Remote control of the basics: exposure control, focus and shutter.
  • Useful overlays for the above. Thirds grid, histogram, AE mode, AF info.
  • Calculating exposure settings for ND filters and astrophotography.
  • Saving calculation presets.
  • Viewing photos on the camera in the list.
  • Downloading photos to the device.
  • Viewing downloaded photos fullscreen, deleting downloaded photos.
  • Sharing downloaded photos and opening them in another app.
  • Apple Watch widget for triggering the shutter.

Not In Scope for 1.0 — Things that definitely won’t make it.

  • Cameras that aren’t Canon EOS cameras.
  • Cloud functionality.
  • Automatic downloading.
  • Support for videos in Files mode.

Unresolved — Things I’m not sure about.

  • Second screen mode for AppleTV, etc.
  • Applying Calculations Mode results to the camera.

What Keeps Me Up At Night

This exercise was also new to me. What things should you worry about, and which of those are beyond your control?

  • Not having dedicated QA.
  • Keeping “on the rails” and getting everything done properly and on time.
  • App Store rejection.
  • Canon getting uppity.

The first two of those are things I know I can fix myself. The fear of App Store rejection is pretty much life as normal for iOS development, so there’s no real need to worry about that as long as I’m familiar with Apple’s guidelines and don’t bump into the edges of the (admittedly sometimes vague) rules. That last one is more nuanced, and something I need to get legal advice about. That’s where I should concentrate my energy on gaining knowledge.

Conclusion

So, what’s the benefit of writing all this down? Well, I’ve understood what this project is about the whole time, but succinctly describing it to someone else is a bit of a challenge. Not having answers to questions like “Will you support X camera?” or “Can I work with video?” was a bit embarrassing. Now, I can answer “Not at 1.0, no.” with confidence. Sure, I don’t need to answer to anyone else while making my own app, but being able to answer others’ questions with confidence does great things for your own internal confidence, too.

Step Two: Fill The Gap Between Now And Then

Alright, so I’ve got an issue tracker full of tasks and a ship date. I also have a general overview of what Cascable 1.0 should be with the Inception Deck. However, I still haven’t brought all this together to form a set of directions to take me from where I currently am on the project to where I want to be for 1.0.

The problem is, as the lone developer of an app, I’m just in too deep. I can’t see the wood for the trees, and various other clichéd sayings about not having a clear view of the whole situation. I came up with all that stuff above completely on my own. How do I know if it’s any good, or just pure garbage?

What I need is an outsider.


Don’t be fooled, she packs a mean punch.

Meet Alana (that’s “Ah-lay-na”, not “Ah-lar-na”), who has agreed to be Cascable’s Product Owner while I get to 1.0. She’s also my wife, so I suppose she’s also the Product Owner of, well, me. She’s agreed to have meetings with me once every two weeks, splitting my journey into Agile-like sprints. At each one, I’ll explain which targets I missed and why, which targets I did meet, and which targets I plan to meet in the next two weeks.

However, we’re getting ahead of ourselves — my current problem is that even though I have a nice Inception Deck I don’t know exactly what 1.0 should be, never mind how to get to it. Alana also had a concern: “How can I be your Product Owner if I don’t know what the product is?”

It turns out that my problem and her concern can be solved in one step. The reason my issue tracker doesn’t connect the current state of the project to 1.0 is that I just picked ideas out of my head whenever I ran out of tickets. The Inception Deck helps, but it’s still a bit wishy-washy — I need a well thought-out master list of stories to work against. A good way to have Alana know the product? Have her make the list!

Business Time

One Saturday, we sat opposite one another at the dining room table with a pile of Post-It notes, some pens and a camera.

“Alright,” I said, “you’ve just bought a camera and have realised how crappy the supplied app is. You’re going to hire me to write you an app that enhances your photography experience. I want you to tell me what it should do, and we’ll write each thing down on a note.”

She picked up her camera, prodded at it a bit then said “Erm… I guess it should connect to camera, right?”

Great! Our first story — but it was just the opening page, not the end of the storyline. We spent the next hour talking about photography and she made feature suggestions along the way, mainly based on her previous photography experiences. I didn’t make a single contribution to the notes, other than to ask “Why do you want the app to do that?” to make sure that information got written down. Each idea got a note, and after an hour we had a fairly sizeable pile.

After we were done, I quickly added some more notes that contained features I’d already written but she didn’t independently come up with, then started the second half of the exercise:

“Now, I want you to put these all in a line in order of importance to you.”

Again, I didn’t interrupt other than to help when she wasn’t sure. “Should this go higher than ‘Delete photos from the camera’ or lower?”

This is what we ended up with:

For the first time, I sat back and actually studied the notes. I was floored. In front of me was a complete journey to 1.0 and beyond. Features I hadn’t even thought of were high up the list, and of course they were — they were so stupidly obvious. Conversely, features I’d spent a fair amount of time working on (in particular, a “Night Mode” for the app) were right down towards the bottom, probably past the cutoff point for 1.0, and looking at the list I completely agreed with it being down there. In fact, I couldn’t really argue with the order of the notes at all once I heard the reasoning behind Alana’s chosen position.

I’ve been working on this thing for well over a year and a half now, and two hours with someone with fresh eyes completely changed the project and set it off on the journey to 1.0 with a flying start.

What’s better, every single outstanding bug or feature in my issue tracker fit into one of these Post-It stories perfectly. The app doesn’t handle a camera in pairing mode quite right? Well, that goes in the “Connect to camera” story. Oh, crap — that’s the most important story of them all, I should fix that right away!

Step Three: There’s No Step Three!

This is an absolute lie. Step three is the hardest one of all. Now you have a spiffy plan, you have to execute it.

My project isn’t a “side project” any more. Far from it — it has deadlines, a prioritised story list, and a product owner. Between the start of this post and now, it’s transformed into a fully-fledged software project, and I’m letting it down by only working on it in my spare time. Four weeks from today, however, that’s all going to change!


Next time on Secret Diary of a Side Project, we’ll swing back to some coding and talk about what happens when you ignore my advice and decide to refactor a piece of code that really doesn’t need it.


February 8th, 2015

Stripping Unwanted Architectures From Dynamic Libraries In Xcode

Since iOS 8 was announced, developers have been able to take advantage of the benefits of dynamic libraries for iOS development.

For general development, it’s wonderful to have a single dynamic library for all needed architectures so you can run on all your devices and the iOS Simulator without changing a thing.

In my project and its various extensions, I use Reactive Cocoa and have it in my project as a precompiled dynamic library with i386 and x86_64 slices for the Simulator, and armv7 and arm64 for devices.
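
For reference, a fat framework binary like this is normally stitched together with lipo from separate per-architecture builds. Here’s a minimal sketch of how that might look (the build/ paths are hypothetical, but the lipo invocation is the standard one):

$ lipo -create \
      build/simulator/ReactiveCocoa.framework/ReactiveCocoa \
      build/device/ReactiveCocoa.framework/ReactiveCocoa \
      -output Vendor/RAC/ReactiveCocoa.framework/ReactiveCocoa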

However, there’s one drawback to this approach: because they’re linked at runtime, when a dynamic library is compiled separately from the app it ends up in, it’s impossible to tell which architectures will actually be needed. Therefore, Xcode will just copy the whole thing into your application bundle at compile time. Other than the wasted disk space, there’s no real drawback to this in theory. In practice, however, iTunes Connect doesn’t like us adding unused binary slices:

So, how do we work around this?

  • We could use static libraries instead. However, with multiple targets and extensions in my project, it seems silly to bloat all my executables with copies of the same libraries.

  • We could compile the library from source each time, generating a new dynamic library with only the needed architectures for each build. A couple of things bother me about this. First, it seems wasteful to recompile all this non-changing code all the time. Second, I like to keep my dependencies static; making new builds each time means I’m not necessarily running stable code any more, particularly if I start mucking around in Xcode betas. What if a compiler change causes odd bugs in the library? It’s a very rare thing to happen, but it does happen, and I don’t know the library’s codebase well enough to debug it.

  • If we don’t have the source to start with, well, we’re kinda out of luck.

  • We could figure out how to deal with this at build-time, then never have to think about it again. This sounds more like it!

Those Who Can, Do. Those Who Can’t, Write Shell Scripts

Today, I whipped up a little build-time script to deal with this so I never have to care about it again.

In my project folder:

$ lipo -info Vendor/RAC/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    i386 x86_64 armv7 arm64

After pushing “build”:

$ lipo -info Cascable.app/Frameworks/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    armv7 arm64

Without further ado, here’s the script. Add a Run Script step to your build steps, put it after your step to embed frameworks, set it to use /bin/sh and enter the following script:

APP_PATH="${TARGET_BUILD_DIR}/${WRAPPER_NAME}"

# This script loops through the frameworks embedded in the application and
# removes unused architectures.
find "$APP_PATH" -name '*.framework' -type d | while read -r FRAMEWORK
do
    FRAMEWORK_EXECUTABLE_NAME=$(defaults read "$FRAMEWORK/Info.plist" CFBundleExecutable)
    FRAMEWORK_EXECUTABLE_PATH="$FRAMEWORK/$FRAMEWORK_EXECUTABLE_NAME"
    echo "Executable is $FRAMEWORK_EXECUTABLE_PATH"

    EXTRACTED_ARCHS=()

    for ARCH in $ARCHS
    do
        echo "Extracting $ARCH from $FRAMEWORK_EXECUTABLE_NAME"
        lipo -extract "$ARCH" "$FRAMEWORK_EXECUTABLE_PATH" -o "$FRAMEWORK_EXECUTABLE_PATH-$ARCH"
        EXTRACTED_ARCHS+=("$FRAMEWORK_EXECUTABLE_PATH-$ARCH")
    done

    echo "Merging extracted architectures: ${ARCHS}"
    lipo -o "$FRAMEWORK_EXECUTABLE_PATH-merged" -create "${EXTRACTED_ARCHS[@]}"
    rm "${EXTRACTED_ARCHS[@]}"

    echo "Replacing original executable with thinned version"
    rm "$FRAMEWORK_EXECUTABLE_PATH"
    mv "$FRAMEWORK_EXECUTABLE_PATH-merged" "$FRAMEWORK_EXECUTABLE_PATH"

done

The script will look through your built application’s Frameworks folder and make sure only the architectures you’re building for are present in each Framework.
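
If you want to double-check what will actually be uploaded, you can unzip an exported .ipa and point lipo at the embedded framework binary. A quick sanity check, assuming the app and framework names from above:

$ unzip -q Cascable.ipa -d /tmp/ipa-check
$ lipo -info /tmp/ipa-check/Payload/Cascable.app/Frameworks/ReactiveCocoa.framework/ReactiveCocoa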

Much better! Now I can throw fat dynamic libraries at my project that contain all the architectures I’ll ever need, and my build process will deal with which architectures are appropriate at any given moment.